Latent Space
December 28, 2025

One Year of MCP — with David Soria Parra and AAIF leads from OpenAI, Goose, and the Linux Foundation

The Agentic Handshake: Why MCP is the New TCP/IP for AI

By Latent Space

Date: December 2025

Quick Insight: This summary is for builders tired of manual tool-calling and researchers tracking the transition from chat interfaces to autonomous agentic ecosystems. It explains how the Model Context Protocol (MCP) is moving from a local experiment to a global standard for AI connectivity.

This episode answers:

  • Why did Anthropic and OpenAI set aside competition to build a shared foundation?
  • How do long-running tasks solve the friction of constant user approval?
  • What makes MCP superior to standard REST APIs for LLM integration?

David Soria Parra and leaders from OpenAI and the Linux Foundation discuss the transition of MCP to a neutral open-source home. This move signals the end of the walled-garden period for AI tools and the beginning of a standardized agentic stack. The collaboration between fierce competitors highlights the urgent need for a universal communication layer in the AI industry.

Top 3 Ideas

PROTOCOL OVER PROPRIETARY

"AI years are kind of like dog years."
  • Rapid Spec Evolution: MCP released four major updates in twelve months. Builders must adapt to a protocol that moves at the speed of inference rather than traditional multi-year standards.
  • Prescriptive Authentication: Unlike REST, MCP mandates specific layers for remote servers. This allows enterprises to plug agents into existing identity providers without custom glue code.
  • Universal Connectivity: MCP acts as a translator between models and external data. It prevents the fragmentation of AI by ensuring tools work across Claude and ChatGPT.

THE INTELLIGENT REGISTRY

"The model knows what it wants."
  • Neutral Governance: The Linux Foundation now manages MCP and Goose. This guarantees the protocol remains open and prevents model labs from making a proprietary pivot on the ecosystem.
  • Progressive Discovery: Models can now browse registries to find the tools they need. This reduces context window bloat by only loading relevant documentation for the task at hand.
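The progressive discovery idea above can be sketched as a two-step lookup: the model first sees only a cheap index of tool names, then pulls a full schema into context only when the task needs it. This is an illustrative sketch, not a real MCP registry API; the function names and schemas are made up.

```python
# Progressive discovery sketch: expose only a cheap index up front,
# and load a tool's full definition on demand. All names hypothetical.
FULL_SCHEMAS = {
    "search_flights": {"description": "Search flights", "params": {"origin": "str", "dest": "str", "date": "str"}},
    "book_seat": {"description": "Book a seat", "params": {"flight_id": "str", "seat": "str"}},
    "send_email": {"description": "Send email", "params": {"to": "str", "subject": "str", "body": "str"}},
}

def list_tool_summaries():
    """Cheap index the model sees initially: names only, no schemas."""
    return sorted(FULL_SCHEMAS)

def get_tool(name):
    """Full definition, loaded into context only when actually needed."""
    return FULL_SCHEMAS[name]

# The model scans the index, then pulls just the relevant tool:
print(list_tool_summaries())            # ['book_seat', 'search_flights', 'send_email']
print(get_tool("search_flights")["params"])
```

The point is the shape of the interaction, not the data: context cost scales with the tools a task touches, not with everything in the registry.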

ASYNC AGENT AUTONOMY

"I want to see how agents become asynchronous."
  • Long-Running Tasks: New primitives allow agents to perform research that lasts hours or days. This moves AI from a wait-for-user-input loop to a background worker that delivers finished results.
  • Visual Components: MCP UI allows servers to send raw HTML to clients. Users get rich interfaces like flight seat maps instead of clunky text-based lists.

Actionable Takeaways

  • The Macro Evolution: Standardized communication layers are replacing custom API integrations. This commoditizes the connector market and moves value to the models that best utilize these tools.
  • The Tactical Edge: Standardize your internal data tools using MCP servers today. This ensures your company is ready for autonomous agents that can discover and use your resources without manual API integration.
  • The Bottom Line: The agentic stack is consolidating around MCP. Interoperability is no longer a feature; it is the foundation for the next decade of AI utility.

Podcast Link: Click here to listen

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space.

swyx: Hey, and here we are, joined finally in the studio for the first time. Welcome back, David, from Anthropic/MCP.

David Soria Parra: Yeah, hey. It's nice to finally talk to you in person. Last time, about a year ago, it was over video call, and this is way more fun. It's been a crazy eight months, and I think we just celebrated the one-year anniversary of MCP, at least of the public announcement. And yesterday was the Agentic AI Foundation launch.

swyx: Yeah, that was a nice event. It was nice to see the Anthropic office, and very good food, I would say. In terms of my food bench, Anthropic does rank over OpenAI.

David Soria Parra: [laughter] At least that's what we have going for us.

swyx: Awesome, man. Do you want to give a quick overview of what's happening with MCP and how you're donating it to the foundation? Then we'll do a one-year recap of the protocol itself, and then we'll have the rest of the leads from the foundation join us for more of the high-level view.

David Soria Parra: Yeah, that sounds good.

David Soria Parra: So, where are we at the moment? A year ago we launched it, and then we had this crazy adoption over the last year, which honestly felt like an eternity. The growth started very early, around Thanksgiving and Christmas, with a lot of builders building MCP servers. Then the first big clients came in, like Cursor and VS Code, and then there was this inflection point around April, with Sam Altman and Satya and Sundar all posting that they're going to adopt MCP at OpenAI, at Microsoft, at Google. That was really the big inflection point.

But in all of that time, we also had to do a lot of work on the protocol itself. We launched originally as basically local only: you could build local MCP servers for Claude Desktop. Then in March this year we moved into remote MCP servers, so you can really connect to a remote server, and introduced the first iteration of authentication. In June we revisited that and improved it quite a bit so that it works better for enterprises in particular. We were very lucky that between March and June we had absolute industry-leading experts, people who literally work on OAuth itself, helping us get some of the pieces right.

David Soria Parra: And then we focused a lot on security best practices and that type of work. Now I feel we have a really solid foundation, and at the end of November we launched the most recent iteration of the protocol, the next bigger improvement: long-running tasks, to really allow for deep-research-type tasks and maybe even agent-to-agent communication. So I think we're just stepping into this territory now of: okay, we have really solid foundations, we have one more big primitive we want, we want to make some scalability things work, and then we're going to get into a phase where it probably becomes a bit more stable. It's been an absolutely crazy year, man.
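The long-running tasks primitive David describes can be sketched as a create-then-poll pattern: the agent kicks off work and checks back later instead of blocking the conversation. This is a hypothetical illustration of the pattern, not the actual MCP tasks API; `create_task`, `poll`, and the in-memory task store are all made up.

```python
import uuid

# Hypothetical in-memory task store; a real server would run the
# work in the background and persist state somewhere durable.
TASKS = {}

def create_task(description):
    """Start a long-running job and return a handle immediately."""
    task_id = str(uuid.uuid4())
    TASKS[task_id] = {"description": description, "status": "working",
                      "result": None, "steps_left": 3}
    return task_id

def poll(task_id):
    """Each poll advances the simulated work; when it finishes,
    the status flips to 'done' and a result becomes available."""
    task = TASKS[task_id]
    if task["steps_left"] > 0:
        task["steps_left"] -= 1
        if task["steps_left"] == 0:
            task["status"] = "done"
            task["result"] = "research summary ready"
    return {"status": task["status"], "result": task["result"]}

# Client loop: the host app is free to do other things between polls.
tid = create_task("deep research on MCP adoption")
status = poll(tid)
while status["status"] != "done":
    status = poll(tid)
print(status["result"])  # research summary ready
```

The shape matters more than the details: the client holds only a task handle, so the work can outlive any single request or connection.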

swyx: You did say agent-to-agent, and there is an A2A protocol. I'm curious, when the Agentic AI Foundation got formed, was there any discussion about any of these other protocols being a part of it? You know, Sean wrote a post called "Why MCP Won" already.

David Soria Parra: One of my favorite posts. And it was before Sam and all the other guys. You were right.

swyx: Well, I mean, I think it was just obvious that was going to happen.

David Soria Parra: So we of course had conversations around what else is in the market; there are payment protocols that are interesting, and so on. But when we wanted to start a foundation, we wanted to make sure of two things. First, we wanted to start small and make sure the founding group works. For us at Anthropic, it's the first time we have an open source foundation, so this is all new to us. We really wanted to start small, learn along the way, and be able to shepherd this in the way we feel is best for the industry, together with OpenAI and Block.

David Soria Parra: But the second part is that we really wanted to see things that have a lot of adoption, that are, at least on the protocol side, de facto standards. And I don't think any of the other protocols are there just yet. Of course, if they get there, then we're super open, as long as they're complementary to what's in the foundation. On the application side, we're a little more flexible and more open, but on the protocol side we really want to make sure the foundation doesn't encompass five protocols for the same communication layer. So yeah, there was discussion, but for now we just want to start small.

swyx: Is there a role, like a double hat that you have now with the foundation, or are you more focused on MCP?

David Soria Parra: I am still mostly focused on MCP. It's a bit of a double hat. I think people need to understand that the foundation part is mostly just an umbrella to make sure the projects under it always stay neutral; that's really the most important part to understand. The rest of it is things like how we use the foundation's budget for events, which is quite dry. The technical parts, like MCP, actually stay the same: nothing has really changed in the way we govern MCP, so it's still my job as the lead core maintainer to shepherd the processes and the protocol forward. Beyond that, the additional role is that I'm also going to be on the technical steering committee of the foundation, which will figure out which projects we want in the foundation. If someone comes to us with a project, the people that have projects in it will decide: is this something we would want, is it well-maintained, does it have a lot of adoption, is it not going to go away? We want to make sure the foundation has super interesting and important projects and isn't a dumping ground, like how some foundations might have ended up.

swyx: That's true. So we're going to meet some of the others later, but maybe we'll focus back on MCP development. You covered a lot; there have been four spec releases. That's a lot, and some people may have missed some of them, is what I'm saying. I think it's really interesting how you've continued to work on really important parts. I always think it's very hard to follow up a major success with a sequel, because it's hard to repeat that impact, but every single time you've actually managed to focus on something important. Maybe we'll start with the March one, which is streamable HTTP, and the auth spec, right? I don't know if you want to highlight any others, but we'll just catch people up on that stuff.

David Soria Parra: Yeah, that was such an important one; it was the number one requested thing. It really opened up this remote story, and we actually already knew in November and December that the next big thing would be doing this remotely, and that authentication is quite important. One thing people very rarely notice about MCP is that MCP is very prescriptive in each layer, and other protocols are not like that. For example, if the client and the server don't know each other and you want to do authentication, you need to do OAuth, right? We very early on wanted to have one way to do things. So we really focused on: what does this mean, how do we build a protocol that has the streaming properties we require, and then how do we do authentication? In the first iteration of authentication, I think we did an okay job, but we got some aspects wrong, and most of them honestly were just me not understanding enterprises well enough. But then again, the strength we have with MCP, and the one thing I'm proud of if anything, is building a community of people that can come together and help me figure things out. I have my set of experiences of what I'm good at, and enterprise authentication, it turns out, is not one of them. But there are people way better suited for that, and so that's March.

swyx: I saw you post that, but I didn't really dig into the details. Was it like the typical SSO type of authentication issue?

David Soria Parra: The main issue is that in OAuth there are two components: there's the authorization server, which gives you the token, and there's the resource server, which takes the token and gives you the resource in return. In the first iteration of our authentication spec, we combined them together into the MCP server. That's kind of usable if you're building a public MCP server as, you know, a startup building a server for yourself: you want to bind it to the accounts you already have, and that's completely workable. The reality in enterprises is that you authenticate with some central entity. You have an identity provider, an IdP. Most people don't even notice that's happening; all they know is, "in the morning I log in with Google and then get access to all my work stuff." But that's effectively the IdP. And if you combine these into the same server, you just can't do that anymore. So all we needed to do is say: the MCP server is a resource server, and how you get the token from the authorization server is separate. We have opinions on how you should do it, but it's separated. That's what happened in the June spec, where we separated this out and worked through things like dynamic client registration and other aspects that were also part of the March spec. We can talk about that; it's a whole other story of how we're actually pushing the boundaries of what OAuth can do, because we're trying something very unique with MCP. But yeah, that was the big part: the first iteration of the authentication spec in March, then fixing it in June.
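The separation David describes can be sketched in a few lines: the authorization server (the IdP) issues and validates tokens, and the MCP server acts purely as a resource server that checks a bearer token before serving a tool call. This is a toy illustration of the role split, not the real OAuth flow or the MCP SDK; all class and method names are invented.

```python
import secrets

class AuthorizationServer:
    """Stands in for the enterprise IdP: the only party that issues tokens."""
    def __init__(self):
        self._issued = set()

    def issue_token(self, user):
        token = secrets.token_hex(8)
        self._issued.add(token)
        return token

    def introspect(self, token):
        """Answers 'is this token one I issued?'"""
        return token in self._issued

class MCPResourceServer:
    """The MCP server never issues tokens; it only validates them."""
    def __init__(self, auth_server):
        self.auth = auth_server

    def call_tool(self, token, tool):
        if not self.auth.introspect(token):
            # Rough equivalent of an HTTP 401 pointing the client back at the IdP.
            return {"error": "unauthorized"}
        return {"result": f"ran {tool}"}

idp = AuthorizationServer()
server = MCPResourceServer(idp)
token = idp.issue_token("alice")
print(server.call_tool(token, "search"))    # {'result': 'ran search'}
print(server.call_tool("bogus", "search"))  # {'error': 'unauthorized'}
```

Because the MCP server only holds a reference to the authorization server, swapping in a different IdP requires no change to the resource side, which is the point of the June revision.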

swyx: What's the state of agents authenticating on my behalf? Because even today with OAuth, I still have to, you know, log into Linear and whatnot.

David Soria Parra: OAuth is, for the most part, a very human-centric protocol. It just tells you how you obtain a token if you don't have one. Once you have a token, it actually doesn't matter; you just put it into the bearer token. So we're not very prescriptive about what agent-to-agent authentication, or authentication on behalf of agents, would look like. There are ideas we're looking into, and I don't have all the specifics, but we're not prescriptive there in the same way we are elsewhere. Technically, the moment you have a token, which might be bound to a workload identity or something like that, you can just pass it to the MCP server; we're just not telling you how to obtain it yet. So people do this, particularly when they're within an enterprise and have a somewhat closed ecosystem, but if the client and the server don't know each other, we just don't have a good solution for now.

swyx: And then on the remote thing, you went from local servers to SSE and then streamable HTTP. Any learnings you want to call out there? Any regrets, or learnings for others?

David Soria Parra: Ah man, transport. That one discussion has never stopped since the very beginning last year. We literally just spent the last two days at the Google offices with a bunch of senior engineers from Google, Microsoft, AWS, Anthropic, and OpenAI asking: what do we need to do here to really make this solid? When we look back at March, we wanted a transport that retains a lot of the properties we had from standard IO, because, and I still believe this today, MCP should also enable agents, and agents are inherently somewhat stateful; there's some form of long-lived communication going on between the client and the server. So we always looked for something like that. We also looked into alternatives, like what happens if we use WebSockets, and we found a lot of issues with doing a proper bidirectional stream. So we asked: what is the right middle ground between something that can be used in the simplest form, where people just want to provide a tool, but that can be upgraded to a full bidirectional stream if you need it, because you really have complex agents communicating with each other? That's where streamable HTTP was born, with that intent.

David Soria Parra: And I think there's something that in retrospect we got right and something we got wrong. We got right that we lean on standard HTTP in that regard. We got wrong that we made a lot of things optional for clients. For example, the client can connect and open this return stream from the server, but it doesn't have to. And the reality is that no client does it, because it's optional. So a lot of the bidirectionality goes away, and features like elicitation and sampling are just not available to servers, because clients don't have that stream open. The client implementer goes, "ah, that's the minimum viable product for me, I don't have to do it," and so that became an issue.

The second lesson is that the way we designed the transport requires some form of holding state on the server side. That's fine if you have one server, but the moment you scale horizontally across multiple pods or containers, if you get a tool call and then an elicitation and an elicitation result, they might hit two different servers, and you need a way for those two servers to bring that result together. You effectively need some form of shared state, usually Redis or memcached or something like that. That's kind of okay; we've seen this done in PHP applications and Python applications, but it's not fun at scale. And we know some companies, the Googles and Microsofts of the world, are doing MCP at a scale that, I can't tell you the numbers, but it's in the millions of requests. So now it becomes a problem, and now we're sitting here asking: how do you build an iteration of the protocol that keeps the principle of being as simple as possible for simple MCP servers, but allows the full spectrum of really bidirectional streaming if you need it, while also being scalable?
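The horizontal-scaling problem David describes can be sketched with two server "pods" that share session state through an external store, so a follow-up request can land on either one. A plain dict stands in for Redis/memcached here; the class and method names are illustrative, not any real MCP server framework.

```python
# A dict stands in for the shared store (Redis/memcached in practice).
SHARED_STORE = {}

class ServerPod:
    """One replica of a horizontally scaled MCP server."""
    def __init__(self, name, store):
        self.name = name
        self.store = store

    def start_elicitation(self, session_id, question):
        # Persist the pending question externally, not in pod-local memory,
        # because the user's answer may be routed to a different pod.
        self.store[session_id] = {"pending": question, "answer": None}
        return question

    def receive_answer(self, session_id, answer):
        # This pod may never have seen the session before; the shared
        # store is what lets it resume the in-flight elicitation.
        state = self.store[session_id]
        state["answer"] = answer
        state["pending"] = None
        return f"{self.name} resumed session with answer: {answer}"

pod_a = ServerPod("pod-a", SHARED_STORE)
pod_b = ServerPod("pod-b", SHARED_STORE)

pod_a.start_elicitation("sess-1", "Which calendar?")
# The load balancer routes the user's reply to a different pod:
print(pod_b.receive_answer("sess-1", "work"))  # pod-b resumed session with answer: work
```

With only one pod, the dict could live in process memory; the shared store is exactly the extra cost David is describing once you scale out.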

David Soria Parra: And I think we're just about to find the right solutions, but it's complicated, because very little of today's technology sits in between. People either do the simple thing, something like REST, or they do a full bidirectional stream with something like WebSockets or gRPC, and we need kind of both.

swyx: What's it like to be in that kind of meeting, where you have all these impressive companies and everyone is senior and everyone has an opinion?

David Soria Parra: It's so much fun. I get to work with some of the best engineers in the industry; it's insane.

swyx: Okay, well, who decides?

David Soria Parra: Usually we're trying to get to consensus. The reality is that technically I decide at the end of the day, but I think that's more of a formalism. What you're trying to do is really narrow down what the real problems are that we all agree on, what the things are where we don't necessarily agree, and then within those bounds build the best solution. It takes a while, it takes a lot of iterations, but honestly it's so much fun, because you get to see these unique problems from the companies. You see some of the identity of the companies in the problems themselves: Google has a different set of problems from Microsoft, and a lot of it comes from their ways of building things, and the problems at Anthropic look different from the problems at OpenAI. What I love about all of this is that sometimes you step back and realize you're sitting in a room with all these competing companies, but you're actually building something together. I've been in open source for like 25 years; when a standard works, this is the ideal, and these people are all amazing. I just learn so much from all my peers, so I'm very grateful to be in this situation.

swyx: Yeah, this reminds me of the IETF standards process. Is there some discussion about how this works as a private group versus something more traditional?

David Soria Parra: It's an interesting one. It does look a little bit like the IETF, but the IETF is slightly different. The IETF is an open forum where everybody can go, and the result is that the IETF is very consensus-based and, by accident, not necessarily because they want to be, quite slow in its processes. That's very good in many ways: once something is up, it cannot be undone. For example, the OAuth 2.1 spec has been in the works for three or four years, and they're just not done with it. That's the timescale at which IETF standardization works; these things can take a long, long time, and I think that's good for certain pieces. But AI at the moment is so fast-moving that you're somewhat forced to find a smaller group. That's why we run MCP as a really traditional open source project, with a core maintainer group of about eight people that basically decides everything, with input from everybody else. We get input, people can make suggestions, and a lot of the changes don't come from the core maintainers, but they are the ones that decide. It's a middle ground: somewhat consensus-based, but also a bit of a dictatorship, which can be good if you want to move fast, which MCP wants to do at the moment.

swyx: How do you balance the influence of model improvements with how you shape the protocol? Obviously, at Anthropic and OpenAI you're doing post-training on these models to make them better at tool calling, and you have preferences on the shape of the protocol, versus people who are not aware of how you're structuring that. So do you share some of this? Does the protocol influence some of the model post-training, or vice versa?

David Soria Parra: I'm maybe not 100% familiar with everything we do on the research side; I'm a product person. But it influences the post-training in the sense that we make use of things like the MCP Atlas that we have in our model card, taking this large set of tools in the wild and making sure our models work with them. The primitives of the protocol, though, are actually very rarely influenced by model improvements. There's a sense in which we anticipate the exponential the models are on in terms of improvement, and we rely to some degree on mechanics you can put into model training. To get more concrete: people have had long conversations around the context bloat of MCP servers. That happens because MCP opens the door to a lot of tools, and if you naively take all the tools and throw them into the context window, you just get a lot of bloat. It would be the equivalent of taking all your skills, all the markdown files, and throwing them all into the context; you'd also have a lot of bloat. But we always knew you can do something like progressive discovery. That's a general principle: you give the model some information and let the model then decide to gain more information. And here is some of the foresight we have, because we are the big model companies: we know we can train for this if we want to, and what the training does is just optimize it. The model can do it in principle already; any model that does any type of tool calling can do it, but if you train the model for it, it's just better at it. So these things go hand in hand in a way.

But at the end of the day, the general mechanic of progressive discovery is just inherent to any type of model that can do tool calling.

swyx: That makes sense. And I think the context rot point is important. Then there's the MCP versus code mode thing, and it's like, well, if Anthropic says code mode and Anthropic made MCP, maybe that's the best way?

David Soria Parra: So, the blog post never actually called it code mode.

swyx: That's fair.

David Soria Parra: People call it that; we call it programmatic MCP, others call it code mode, but here's the interesting part. First of all, MCP is a protocol between the AI application and servers, so the model is technically not involved in MCP. Now you have an application going: I have a bunch of tools, what can I do with them? You can do the naive thing: I have tools, I'll throw them into the model's tool definitions and call them. But you can be more creative. Models are really good at writing code, so what if you treat the tools just like API calls, give them to the model, and the model generates code? What you're effectively doing is the same composability the model would have done anyway, calling tool one, getting the result, going back to inference to call tool two, then combining it into tool three. All you've done is let the model optimize it in advance and put it into a bunch of code that's executed in a sandbox: call one, put it into two, put the results into three, get a result. It's an optimization at the end of the day. But the benefits of MCP, having authentication done for you, having something suited to the LLM, something that is discoverable and self-documenting, those haven't gone away. That's still MCP for you; you're just using it a different way.

So I'm always a little bit confused when people go, "but doesn't that mean MCP is dead?" No, it's just a different use. And I think you'll see evolutions as we get better at how we use these models and the infrastructure around them matures. Once you can assume that most AI applications have some form of sandboxing for execution, you can do a lot more fun stuff like that. But I don't think the value of a protocol that connects the model to the outside world is gone because of it.

swyx: That makes sense.

David Soria Parra: I see it purely as an optimization, honestly, a token optimization.
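The "token optimization" framing can be made concrete: instead of three inference round trips (call tool one, return to the model, call tool two, and so on), the model emits one script that composes the calls, and the script runs once in a sandbox. The functions below are hypothetical stand-ins for MCP tool calls; none of this is a real API.

```python
# Stand-ins for three MCP tools the model would otherwise call
# one at a time, with an inference pass between each call.
def fetch_orders(customer):
    return [120, 80, 45]  # dummy order totals for illustration

def apply_discount(totals, pct):
    return [round(t * (1 - pct), 2) for t in totals]

def summarize(totals):
    return {"count": len(totals), "sum": sum(totals)}

# Model-generated composition, executed once in a sandbox:
# the intermediate results never pass through the context window.
orders = fetch_orders("acme")
discounted = apply_discount(orders, 0.10)
report = summarize(discounted)
print(report)  # {'count': 3, 'sum': 220.5}
```

The tools themselves are unchanged; only the orchestration moved from repeated inference passes into generated code, which is why this is an optimization on top of MCP rather than a replacement for it.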

swyx: This is a good time to bring up skills. [laughter] Skills are a more recent concept; I only bring them up because in my mind they're linked to progressive disclosure and to adding preset code scripts and all that. Skills can also create skills, which is very fun. A lot of people are trying to place MCP versus skills. Obviously they're not overlapping, but how do you view it?

David Soria Parra: Yeah, I agree. The interesting part is that they're not overlapping; they solve different things. I think skills are super great, and they've really been built from the principle of progressive discovery. But the mechanism of progressive discovery is universal to anything you can do with a model. What skills do is give you the domain knowledge for a specific set of tasks: how should the model behave as a data scientist, or as an accountant, or whatever. MCP gives you the connectivity for the actual actions you can take with the outside world. So they're somewhat orthogonal: skills give you the domain knowledge, which is kind of vertical, and MCP gives you the horizontal of, okay, give me that one action. Of course skills can take actions, because you can have code and scripts in there, and that's great, but there are a few interesting aspects I think people missed. The first is that you need an execution environment, a machine. That's perfectly fine if you run something locally, like Claude Code or a CLI; in those scenarios, where you have an execution environment, these things make a lot of sense. Or if you have a remote execution environment, it also makes a lot of sense, but you still don't get authentication that way, and what MCP brings is the authentication piece. The second is that the server is maintained by someone else: if you have, say, a Linear MCP server, they can improve the server, and you don't have to deal with that in your skill; it's not fixed in place. And the third part is that you don't necessarily need an execution environment, because the execution environment is effectively somewhere else, on the server. So if you build a web application or a mobile application, these things work better in some of those regards. So I think they're orthogonal for the most part. I've seen some quite cool deployments where people use skills to express different functions, the accountant, the engineer, the data scientist, and then use MCP servers to connect those skills to the actual data sources within the company. I think that's actually a really fun model, and it's the closest to how I think about this.

Swixs: Yeah. So MCP is the connectivity layer, or the communication layer, I think, was the word you chose. So architecturally, I'm wondering: is there an MCP client inside each skill, or is there a shared client that can discover skills?

David Soria Parria: We do that as a shared one. Technically you want more sharing, because the more shared the client is, the more you can do: discovery, connection pooling, automatic discovery of things. In a skill you might just very loosely describe what you want, and the client can look into the registry it has access to and get an MCP server for you. You can do these things with a shared client, but I think both work at the end of the day. These are things to experiment with.

Swixs: I do want to highlight, for people who might have missed it: you keep saying "we do" this and that, and I think nobody understands how much Anthropic dogfoods MCP. I only understood this when I watched John Welsh give his talk, where he was like, "Yeah, we have an MCP gateway. Everything goes through this." Can you say more about that?

David Soria Parria: Yeah. We use both, right? We use a lot of skills internally and a lot of MCP servers internally, because obviously you want to make it very easy for people to deploy MCP, and you want some form of integration with your IdPs and so on. So we have a gateway that we've built custom for ourselves, and you just deploy your MCP servers to it. It's all internal apps, all internal stuff. Some of them are technically external things, but in the absence of a first-party offering we have our own: we have a Slack MCP server, which I love to use to have Claude summarize my Slack for me. There's quite a lot of usage of that. We even have an MCP server for the biannual survey we run, around how we feel about the company, the future, AI, safety, these types of things, and people can ask lots of questions about the results, which is really fun.

Swixs: Is it your team maintaining it?

David Soria Parria: No, we maintain the gateway. One of the fun parts is that when we started MCP, even before we open sourced it, it was born out of this idea: I'm in a company that is growing like crazy, I'm on the developer tooling side of things, and I will grow slower than the rest. How can I build something they can all build on for themselves? That's really the origin story of MCP. So it's fun to see, a year later, that that's what's actually going on: people build MCP servers for themselves. I probably don't even know 90% of the MCP servers that exist at Anthropic, because they might be in research and I might not even see them, or I just don't know, because people build them for themselves.

Swixs: But do they host them themselves? Is there a remote?

David Soria Parria: They effectively have a command to launch it, and it just launches in a Kubernetes cluster for them. So it's partially managed. That's good infra for anyone at a large company to build as platform infra, and there are platforms that offer it to you; for us, from a security perspective, we wanted to build it ourselves. The person who built FastMCP, Jeremiah, has a company that offers FastMCP Cloud, which is a little bit like that: two commands and you have a running instance of an MCP server that talks Streamable HTTP. And a lot of enterprises use things like LiteLLM as a gateway; they can just launch stdio servers, attach them to the gateway, and the gateway does all the authentication, all the hard parts of MCP, for them. So there are a lot of ways to do this, but that's good infrastructure you really want to have: make it trivial, make it one command to launch an MCP server that was a stdio server, and suddenly it's a Streamable HTTP server with authentication integrated, and you as the end developer only had to do the stdio part.
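A toy version of that gateway pattern (a sketch under our assumptions, not Anthropic's implementation): the gateway checks a bearer token at the edge before forwarding the JSON-RPC body to the wrapped stdio server. A real gateway would validate the token against the company IdP and speak Streamable HTTP; both are stubbed here:

```python
VALID_TOKEN = "test-token"  # stand-in; a real gateway validates against the IdP

def gateway_handle(headers: dict, body: str, upstream) -> tuple:
    """Do the hard parts of MCP at the edge: here, just bearer auth,
    then forward the JSON-RPC body to the wrapped stdio server."""
    if headers.get("Authorization") != "Bearer " + VALID_TOKEN:
        return 401, '{"error": "unauthorized"}'
    return 200, upstream(body)

# Fake upstream standing in for the stdio MCP server behind the gateway.
def fake_upstream(body: str) -> str:
    return '{"jsonrpc": "2.0", "id": 1, "result": {}}'

denied = gateway_handle({}, "{}", fake_upstream)            # no token -> 401
allowed = gateway_handle({"Authorization": "Bearer test-token"},
                         "{}", fake_upstream)
```

The point of the design is that the server author only writes the stdio part; auth and transport live in one shared place.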

Swixs: Yeah, I love calling that stack out, because people will take that and actually put it into their companies. Otherwise, the alternative is chaos, everyone reinventing everything. Shout out to Jeremiah, actually; I invited him to do a workshop on FastMCP at my New York summit. There was recently a very good blog post about how a lot of the MCP usage we're actually seeing is internal to companies, and that's what we see at the moment too, which is really cool: inside big enterprises you see MCP everywhere, and it's growing way faster than you would think, because it's mostly internal, without people seeing it. On discovery: you launched a registry. There were registry companies, there were gateway companies, and the official registry now has other registries putting their own MCP servers into it. You need more registries, man. [laughter] Just one more, bro. What's the registry to rule them all? Any learnings from launching a registry for a new technology? Smithery is one example: if you go on the official registry, there are all these Smithery AI MCPs that you have to authenticate through them, so it's kind of just a pass-through registry. How do you see this shaking out?

David Soria Parria: I think we saw a lot of these different registries come up, and we really felt there was a need for basically an npm- or PyPI-kind of approach: one central entity that everybody can publish an MCP server to. That's really where the official registry came from. And we really wanted to make sure that we're at least encouraging the ecosystem to have a common standard for what these registries speak, because we want to live in a world where, for the task you have at hand, a model can auto-select an MCP server from a registry, install it, and then you just use it, right? It should kind of feel like magic, but for that you need some form of standardized interface. That was really the inflection point: we started working with the GitHub folks quite early, even in April, and then I got distracted [laughter] with other things, like authentication, and worked on that. And so what I want to see, and I think where we slow
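A sketch of what that standardized interface enables on the client side. The field names below are modeled loosely on the registry's server.json format (remote endpoints vs. installable packages); treat the exact schema, names, and URLs as our assumptions, not the spec:

```python
def pick_connection(entry: dict):
    """Prefer a hosted remote endpoint; fall back to a local package install.
    Field names loosely follow the registry's server.json; this is a sketch."""
    remotes = entry.get("remotes") or []
    if remotes:
        return ("remote", remotes[0]["url"])
    packages = entry.get("packages") or []
    if packages:
        return ("install", packages[0]["identifier"])
    raise ValueError("registry entry has no usable transport")

# Hypothetical registry entry for a made-up server.
entry = {
    "name": "example/slack-summarizer",
    "remotes": [{"type": "streamable-http",
                 "url": "https://mcp.example/slack"}],
    "packages": [{"registry": "npm", "identifier": "@example/slack-mcp"}],
}
kind, target = pick_connection(entry)
```

With one common schema, the "model auto-selects a server, installs it, and uses it" flow only has to be written once per client, not once per registry.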
