The Rollup
February 6, 2026

The AI Privacy Problem No One's Talking About in AI with George Zeng

How Confidential AI Agents Solve the Privacy Paradox for Builders

by The Rollup

Date: October 2023

AI agents are here, but their power comes with significant privacy and security risks. This summary cuts through the noise, showing how confidential compute and decentralized approaches offer a path to truly user-owned, secure, and cost-effective AI.

  • 💡 Why are current AI agent setups inherently risky for personal and business data?
  • 💡 How can AI agents retain long-term memory across different platforms without losing context or security?
  • 💡 What role do crypto wallets and decentralized networks play in the future "agentic economy"?

The AI agent revolution is here, but it brings a fundamental tension: how do we give these powerful digital assistants access to our lives without sacrificing privacy or security? George Zeng, Chief Product Officer and General Manager for Near AI, unpacks this challenge, revealing how confidential compute and decentralized networks are building the foundation for a user-owned AI future.

The Privacy Illusion

If you don't know how you're paying for the product, you are the product.
  • Data Collection: Centralized LLMs, even those without ads, collect and use your sensitive data. This means your most private conversations or proprietary code could be used for training or exposed, regardless of ad policies.
  • Trust Erosion: When an LLM synthesizes information or offers recommendations, any hint of ulterior motives (like ads) or data collection fundamentally undermines user trust. This makes the black box nature of centralized AI a liability.
  • Glass Houses: Anthropic may criticize OpenAI for ads, but both collect user data. This highlights a universal privacy challenge that requires a new approach, not just a different business model.

Agent Security: Your Digital Employee

No one should own your most sensitive conversations and your most sensitive data. That should be stuff that's like private to you.
  • Confidential Compute: Running AI agents within Trusted Execution Environments (TEEs) encrypts data, ensuring neither the cloud provider nor Near itself can see your agent's interactions. This provides a critical layer of privacy for sensitive tasks.
  • Separate Accounts: Treat your AI agent like a new employee; give it its own dedicated accounts (Gmail, Slack, Notion) rather than access to your primary credentials. This quarantines potential data leaks, preventing an agent from draining your bank account even if compromised.
  • Code Choice: Building agents in secure languages like Rust over JavaScript can reduce fundamental security vulnerabilities. This is a technical decision with significant implications for long-term agent integrity.

The Agentic Economy: AI with Wallets

Imagine when you have agents able to use intents to make transactions and generate economic value in the real world. That's when I think you have something super super special coming, man.
  • Natural Language Intents: Near Intents allows users to express complex actions in natural language (e.g., "swap BTC for Zcash"), which a decentralized network of solvers then executes. This abstracts away technical complexity, making sophisticated transactions accessible.
  • Economic Agents: Combining AI agents with crypto wallets and intents enables them to perform real-world economic actions, like ordering a pizza and paying for it from their own wallet. This creates a machine-to-machine economy where agents can generate value.
  • Persistent Memory: Agents store conversation context in local files, allowing them to maintain long-term memory across different messaging apps (Telegram, iMessage). This creates a seamless, human-like interaction experience, making agents truly useful over time.

Actionable Takeaways:

  • 🌐 The Macro Shift: The rise of powerful AI agents (like OpenClaw) creates an urgent need for secure, private compute. This isn't just about data protection; it's about enabling a truly decentralized, user-owned AI future, mirroring the internet's evolution from walled gardens to an open web. Centralized LLMs, even without ads, still collect and use sensitive user data, making confidential compute (TEEs) and local-first models essential for trust and control.
  • ⚡ The Tactical Edge: Implement AI agents within confidential virtual machines (TEEs) and establish separate, quarantined accounts for them. This protects your core digital identity and assets from potential leaks or prompt injection attacks, allowing you to experiment with agent capabilities without exposing critical data. Consider open-source models for 90% cost savings and improved privacy.
  • 🎯 The Bottom Line: The next 6-12 months will see AI agents move from novelty to necessity. Builders and investors must prioritize privacy-preserving infrastructure and user-owned AI paradigms to capture this value securely. Ignoring these foundational security layers risks catastrophic data breaches and undermines the trust required for widespread agent adoption, making decentralized, confidential solutions a competitive differentiator.

Podcast Link: Click here to listen

Do I really want to be sending that to Anthropic or OpenAI or to other companies when it's like the most important thing that I create? I don't want my source code to be leaked or used for training or come up in some other kind of response, right? So, privacy is a really important part of deploying a local LM.

There is a thesis or a theory that we could give it its own crypto wallets and then it can start making transactions and it can be using near intents and traversing the multi-chain and doing everything in crypto that we can do as humans. Maybe even far better than we can do as humans.

Welcome back to AI Supercycle, our premier AI show airing every single week presented by Near. We cover the ins and outs of decentralized AI, privacy, and the future of this massive technology. Near is the blockchain for AI and the execution of AI native apps. You can check out Near's latest AI product at near.ai. Sit back, relax, and enjoy the show.

All right, we've got George here from Near. He was just saying behind the scenes is the craziest open source project that he's worked on, which I think is pretty accurate. George, it's your first time on the show. We're friends over with the guys at Near and you guys are doing incredible work.

Ilia came on a couple weeks ago and said, "Look, we're an AI company. All of this crypto blockchain stuff, that's all a side quest to ultimately empower a lot of the AI work that you guys are doing." I want to get your reaction to a few things. We're going to talk about Claudebot. We're gonna talk about Open Claw. Near intents plays into this as well. Before we do, we'd love to get a quick background on yourself.

Sure. Thank you for having me. First of all, I'm George. I'm the chief product officer for Near and I'm also the general manager for Near AI. So I've been thinking about a lot of these questions you've been talking about for a long time and certainly in the last few weeks all of this just has accelerated 10 times 100 times. It's incredible.

Like I said before we get into Open Claw and Claudebot, Anthropic just a few minutes ago released these Super Bowl commercials. Have you had a chance to see these?

I have not. That is very intriguing.

I want to get your reaction to these. They're phenomenal. So, we're gonna pull it up. Let's pull it up, Leo, and then we'll play this and George we'll get your reaction. There's going to be one and then the other. So, okay. Let's get it up, Leo. and we'll go ahead. We'll play this for George and the rest of the audience.

Okay. How do I communicate better with my mom? Great question. Improved communication with your mom can bring you closer. Here are some techniques you can try. Start by listening, really hear what she's trying to say underneath her words. Build conversation from points of agreement. Find a connection through shared activity, perhaps a nature walk. Or if the relationship can't be fixed, find emotional connection with other older women on Golden Encounters, the mature dating site that connects sensitive cubs with roaring cougars. What? Would you like me to create your profile?

So, Anthropic is taking shots at Open AI for putting ads into their responses. And so, this is, in my opinion, this is an insane clip. Like, they're going to play this in the Super Bowl and it's going to catch a ton of people offguard.

George, before we get your reaction, there's one more here. It's also very very funny. Leo, let's go ahead and we'll play the next one here.

Hey, can I get a six-pack quickly? Perfect. That is a clear and achievable goal. Would you like me to tailor a personalized workout plan? Yes. Perfect. Let me personalize this for you. Let's start with your age, weight, and height. Whenever you're ready. 5'7, 23 years old, 140 lbs. Got it. I'll create a plan that focuses on aesthetic strength training. But confidence isn't just built in the gym. Tricep Boost Max, the insoles that add one vertical inch of height and help short kings stand tall. What? Use code hidemaxing 10 for big discounts.

Insane, man. All right, Leo, we can take that off the screen. What's the difference between me and you?

It's pretty incredible to see how this is heating up. I just want to get your reaction to the Open AI versus anthropic LLM wars.

I mean, I understand where they're coming from. It's like there's old saying in Silicon Valley that if you don't know how you're paying for the product, you are the product in many different ways. You're the monetization for the product, right? So, it's really funny that Anthropic is taking these shots at OpenAI.

Like it also I guess points to the fact that I guess Anthropic never plans on charging for ads or displaying ads. Sounds like that they're only going to go with subscription only. I did think that the first ad is a little bit disingenuous because I presume they're open air will never influence the actual LM results with ads, but actually even injecting ads into the the response itself just makes people feel like they they trust the LM a little bit less, right?

I certainly would feel that way.

I same here. I mean even the thought that something might be an advertisement gives me the sense of skepticism to probably distrust the entire response a little bit less.

It absolutely and you know they I see Claude going this enterprise model. Open AAI is primarily taking a retail approach and so to think that they are you know starting to incorporate this whether it's specifically called out, you know, they're going to get a lot less click-through rate and engagement via those advertisements. the the way in which you know the deeper they embed it and they abstract the fact that it's an advertisement away is this really weird spectrum between more organically weaving it in versus sacrificing trust and and you know the reputation of their end users.

Yeah, I hear you. It's also a little bit different from like a search engine because a search engine displays information and display its ads whereas an LLM will summarize and will actually give you a specific recommendation or point of view. So I think some people may be swayed that hey it's like if you have a friend who has an alter ulterior motive, can you really trust what they're saying or are they actually telling you something to advance their own financial interests?

Yeah, you're absolutely right. It's different from searching that way because everything is synthesized and so you don't know what you can't backwards can't reverse engineer what the incentives are because you can't really untangle that black box here.

Here's a spicy take, right?

Yeah. Anthropic can take pot shots at OpenAI, but they have the same user data that OpenAI has, right? If you go to anthropic and you ask it, hey, I have a, you know, a relationship difficulty with my mom. Help me in terms of improving that relationship. Even if they don't show you that ad, they do collect that data, right? And they do use that data, right?

Which is why one of the things I think that we believe here at Near is that the importance of confidential inference and userowned AI is no one should own your most sensitive conversations and your most sensitive data. That should be stuff that's like private to you.

Yeah. Right.

Yeah. That's a tremendous tremendous segue.

Did you want to go ahead and finish that point?

Yeah. I'm just saying that you know you can you can throw stones but you know Anthropic also has a glass house so to say with some of their sort of data privacy and data collection practices as well.

Yeah. Yeah. And like I said that I think that's a tremendous segue. You guys and I want to get a full chronological history here of the steps because you guys are shipping incredibly fast. The AI space as a whole is moving extremely fast. And so, you know, why don't we start at the beginning and then I'd love to get caught up on where you guys are at today.

You know, privacy has been an incredibly important piece to what Near is building. You guys came out with private chat. Give us just a little bit more context about where private chat came from and then we'll get into what every, you know, all the developments since private chat, which was just a few weeks ago, but you guys now with Clawbot and OpenClaw have really developed a lot since private chat. So, we'll go through the full history here, but but yeah, please set set the scene for us.

So, I'll take you way back way back to when Ilia was at Google and helped create the attention to all you need seminal paper that helped birth a lot of the LMS via via transformers via this sort of like invention of transformers. But near I'm not sure if you you know this Robbie, but Near was actually founded as a AI company. The name near comes from the Ray Kershw book the singularity is near.

So Near started out as a AI company built a very successful layer 1 blockchain along the way and in the last few years have been building more and more towards this vision of userowned AI right probably talked a little bit about this but we see user owned AI as essentially I'm not sure if you remember the earlier days of the internet when there was a period of time when where it felt like AOL could have been the the entirety of the internet like one company could have owned the internet because most American households were accessing the internet through AOL.

AOL controlled like the main portal, the websites, everything. The entire experience more or less was AOL. But then the internet became this crazy wild and free and beautiful place where you can spin up your own websites, where you can spin up your own projects, where no one person controls the internet. And given how important AI is to humanity, we would like to see options where AI can be decentralized, wild, open, and free and not controlled by a small subset of companies.

That's the general direction that we've been building in here at Near AI. And the core three products we've been building. One has been this key product around Near AI cloud, which has been publicly launched. anyone can go to near AAI and actually use a confidential compute and confidential inference product right it's basically same very similar APIs to open AAI and you can have inference that is within a trusted execution environment where the data never leaks to anyone like we don't see it here and near no one sees the data that you send in no one sees the responses without your permission that's the first thing that we built second is we've been building towards this direction of decentralized confidential machine learning which is a paper Ilia and a few other folks here at New York released a while back and that's really imagine a world where you actually can can also source things like compute as well from a decentralized network and then third is near private chat which is what you brought up Robbie it's like a private chat product kind of like a chat GPT but no one ever sees the chats that you send to it no one ever sees the responses that come back but what's been really cool is given we build along the whole entire stack and we control the infra also recently released an openclaw application within your cloud.

You can think of it as like a hosted open claw, right? I'm not sure if you follow folks like Jason Kccanis, they've been spinning up their own AI employees with its own computer, with its own notion, with its own Slack, with its own email. So instead of having to buy spend thousands of dollars to buy your own Mac and wait for shipping and do all of the dev as well as ops source to set it up, what we're building is the ability to actually enable that with like one button. And that's like the the next iteration that we're building towards for near private chat, like having your own AI agent and a confidential computer with the agent controlling the entire computer, its own computer, and being able to inter interact with any software you want it to interact with.

Not your keys, not your coins. From the team that pioneered cold storage, Treasure has just released its new wallet. Guys, if you are still using hot wallets in 2025, 2026, you are missing the entire point. Secure your coins. Get a treasure at treasure.io. Holiday is unified crypto payments. Give your users more wherever they are. Go to holiday.xyz to never write a smart contract again. Looking for yield that stands out? Infinify gives you just that. Deposit stable coins and earn far more than the current savings rate. Infinify is a battle tested, reliable defy platform that is ready to go. Head to infinifi.xyz.

I was listening to allin just earlier today and you know I heard I was listening to Jason how you know he spins this thing up basically gave it its own gmail account gave it its own slack and notion and we'll get into some of the security question marks around clawbot and you know we we saw all of that spiral out of control and so you know there are security contingencies that that people should take into account.

You're absolutely right. And then he goes on to say how, you know, this virtual employee then creates his own CRM. It it starts, and this is particularly relevant. He's got a podcast, you know, we've got this this show. And so he's talking about guest booking and things that were actually really relevant, you know, to our day-to-day operations. So yeah, it it was it was, you know, really really interesting to hear.

Before we get into, you know, Claudebot, virtual employees, just to double click on the privacy element of things. You know, right now private chat is something that I I've personally used. I think a lot of people have used and it comes from Near AI. You know, you guys are are privately sending requests and then agent excuse me, models, these LLMs are responding to these prompts, but they don't have insight into like the back and forth. The the private the the chat itself is encrypted. How are you able to you know send out a request get a a contextual response without exposing the you know under what's underneath the hood in terms of the the actual data?

Yeah very straightforward. Basically we have these models the the ones that you can keep completely secure are open source models that you can host and we have these within these things called trusted execution environments which are secure enclaves in GPUs as well as you can have secure enclaves for CPUs as well right so it's encrypted it never leaves a secure enclave so the your data is not visible to near near or like any other parties part of that transaction just because it's secured and kept within this the the trust execution environment itself.

You're also bringing up something interesting as well because like you can think of security in three ways right one is the security of the inference itself which we obtain by keeping it it within trusted execution environments. It's a really great great way to get security without really reducing the efficiency of the models and of the interaction itself. Second is like you can look at security in terms of actually the let's say open claw in the language itself right we're experimenting and also working on maybe like a rust implementation of open claw so we're actually looking at improving the fundamental security of the open openclaw application itself and third we're also looking at security in ways that we can do things like reduce the risk of prop injections which is like a big issue for openclaw.

So, we're thinking about security from a number of different angles.

Yeah. Okay. Incredible. So, so let's get to kind of what happened really over the last five to seven days, right? We had our guy Peter who launched this Claude bot. I think it, you know, he ended up kind of getting in trouble with Claude from Anthropic, you know, because of naming rights and things. And then he rebranded it and that's where Moltbot came from. And then we saw Molt book which was this social media you know platform. We had agents that were posting on the social media. We also had humans that were posing as agents posting on this social media platform.

So give us the the you know the trajectory. We we saw Claudebot. It's this agent that does things far beyond an LLM. It it is I think what people think of when they think of an AI agent. Clawbot is really like the first implementation of this because it really connects to everyone's everyday life. Calendar, emails, um, you know, it can log into your Door Dash, order you some food. I had some guy telling me earlier this week that, you know, he has connected it to his smartome. It turns down the lights when it's time to go to bed. It plays calming music. you know, it can do things that are are really just a machine to machine economy and and internet are able to do. It can connect to pretty much everything. And then there's Open Claw, right? So, take us from Clawbot to Open Claw. What what happened there in the middle?

Yeah, I mean, this was just such a wild journey. I I checked this morning. Open Claw is currently at 163,000 stars on GitHub, right? That is super super impressive. If you look at the actual GitHub stars chart, it's like right. It's like basically goes from very little to basically straight vertical. But open Claudebot first started off as a way that you can actually interact with an LM through kind of like any kind of any kind of messaging app, right? I believe Peter first built this as a way where you can actually use WhatsApp to talk to an an and it just ended up hitting upon something where he brought in a lot of components of software that combined creates something that feels like a real employee or real agent helping you out with stuff, right?

You can run it on your own machine. You can use it in WhatsApp and Telegram and iMessage and a whole bunch of in Slack and Discord a whole bunch of other places. It has a memory so you can talk to it in Telegram. Go about your day and then text it on iMessage later on. It can remember the conversation from Telegram. So that's like a magical experience, right? It's like normally you have to be in one application for something to remember you. But when you text your friend or your parents or your family, they remember the conversation across iMessage, across Slack, across Discord. And this kind of feels like a real person because it can kind of take the conversation memory across different different interfaces.

It can control the browser. One thing I think that's underappreciated is because it can control the terminal. It can do anything on your computer, right? And that's incredibly incredibly powerful. And there's just this amazing open source community that's built a lot of skills and plugins as well for OpenClaw. And it started off as this small thing, Claudebot. And then as it got bigger and bigger and bigger, I think the name changes most of all happened because of the anthropic sort of asked to respect their trademark, their polite as respect their trademark, right? They changed it to Mulbot. Mulbot really didn't take off as a name. So then they went back to the community, brainstormed names a little bit more, and came out with Open Claw.

Right? That's the name change and every step along the way just got better and better and better. You saw more and more people contributing and it's just become it's taken on a life of its own. Peter's actually here in San Francisco today doing a claw con like talk. So there's like hundreds upwards of thousands of people aiming to meet Peter and and and build on open call.

Are are you going or like if you know you do get an audience there with Peter, you know, what what would you ask him?

I think there's a lot of really cool things that OpenClaw is building. What I would ask is actually around security, right? Hey OpenClaw was built in JavaScript. You know, would he or other folks in the community consider building it in Rust or other sort of languages, right? And then I'd also just ask him where he plans to take things next, right? They've gotten so much traction, so much interest. VCs are throwing money at them, right? We're next from here, man.

Let let's talk about the security situation. This is, you know, personally, I know enough to be dangerous, but that's about it, right? Like, I think I could follow one of these tutorials to set it up. grab a Mac Mini, put it on there, get it running, and probably know enough to give it access to all of my credentials so that, you know, it can be properly authenticated and and, you know, run run my whole digital life, but I would be exposing myself to all kinds of security vulnerabilities and exploits because it is an open source project and you know that would be putting all of my personal data and passwords out there on the internet. what is the right way for someone I and now I think people have a taste of this and they want they want it right they they want this virtual assistant they they now see what's possible what is the right way and the wrong way to be running open claw?

I think that I would not run open claw on my own primary machine right if you have a machine and it has your passwords your financial information a whole bunch of additional details that you don't want leaked. I would not run open claw on it. Right? This is why like the the way that we've been thinking about launching openclaw is launching it in a confidential virtual machine. Basically giving the agent itself its own computer.

Yeah. Right. So you can have a separate computer without your stuff on it. And this way you don't run the risk of losing your stuff. There's a real big issue question in terms of like prompt injection. Right? This is when you can you can convince or trick the agent into disclosing information it has about you by embedding prompts into a PDF file or like other sort source of files, right? I don't think there's a really good solution to that yet. It's something that we've been investigating. It's something other folks have been investigating. I think it's something that's really really important to try to figure out how to protect protect against prompt injections. And then the other thing that's really important is you mentioned Moldbook. So, Maltbook is this social network for AI agents where you can just sign up, sign up your agent and they can start posting stuff in like a Reddit or Facebook social network, right? I'm not sure if you saw this, but Mobbook ended up leaking like they had an exposed database where there's like 1.5 million API keys that were exposed like 30 40,000 emails.

Yeah. So you got to be really careful what you give access to in terms of your your AI agent, what you give access to for the agent and what kind of integrations and skills you end up using as well. Right? If I'm not if I'm not super technically savvy, I'd like ask a friend for help or or use a deployed service like Neo AI's OpenClaw deployment as well.

And and so what is the the current configuration or you know user experience to to use open claw let's say you know I I trust near George is is a friendly trustworthy guy right so how do how do we run near's implementation of of open claw and use it for our own personal virtual assistant tasks?

Yeah so now for near what you do is you go to nearai there's like a there's a openclaw weight list I believe it's Near AI near.ai/openclaw. You can just sort of sign up for the weight list and you can get added and then once you sign up it's basically a button click. You click a button and you get your own confidential virtual machine with your own instance of open claw. Right now you still have to go go through terminal to set it up and launch it and make it work. So you you need to understand how to use terminal a bit. We're working on making that easier. But if you don't want to, if you want to just like set up on on OpenClaw, you can just go to openclaw.ai and they have a just like a a terminal source setup where you can actually take openclaw from their GitHub repo and set it up there as well. That's an that's an option, but I'd say anyone thinking about doing this, please please please don't do it on your computer. And please like think before you give it access to all your passwords or anything else that's sensitive because you know you give it access to too much stuff and that that's the way to tears. That's that's the way to people losing a lot of money.

Yep. Got it. And so if we're running it on a Mac Mini or we're running it in AWS, we have the potential to expose our credentials to the whole world.

Yeah, go ahead.

Yes. And but if you have it on a computer with very little credentials on it, that's probably the safest way to do it because there's not nothing to steal.

I see. I see. Okay. Even if it's connected to some of our existing applications. So, if you connect it to a notion account or Slack account or Gmail account, yes, it can maybe go into your notion and like look up sensitive information, but at least it can't go and drain your bank account, drain drain your crypto wallet. I see. If you give give it access to your its own Gmail, yes, you know, it can access stuff from the Gmail, but at least it can't access Robbie's Gmail.

Right. Right. Okay. Okay. So, it can access it. So, the right way to do it is don't give it access to your own personal stuff. Let it set up new accounts for itself.

Bingo. I would I would set up separate accounts for the agent just so you can quarantine exactly what kind of information it has access to.

Got it. Okay. And now, you know, aside from the Mac Mini, aside from the virtual environment, you know, you're also talking about you know c connect the dots for us here. So, so we have the ability to set it up you know open claw out of near AI it's running in a GPU one of these trusted execution environments and and you know this is born out of a lot of the private chat you know near AI chatbot chat GPT style product that was launched a few weeks ago and so you know once once we put open claw inside of the TE does that protect us against all the secure security vulnerabilities that you know would be we would be exposed to on a virtual environment you know once it's in a TE then is it okay to connect it to Gmail and and and these things should we still set up its own accounts for itself how should we think about that versus you know these other environments?

Yeah I the way I think about a new agents is I think about them as like new employees okay right when you set up a new employee you probably don't want to give it access to your passwords you probably still want to set up its own like Gmail and notion and Slack account, right? But the the value of setting in up setting up in a confidential virtual machine is if you do so then um you're able to make sure no one else sees the data that's being interacted with by that agent. Right? If you set it up in a cloud, normal cloud, whoever's the cloud hosting provider can actually see the data of what's happening within that uh with that agent, right? But if you do it within a TE, no one can. And that's additional layer of security that happen. So I would do it within a TE. I would set up different accounts for the agent. Um I would consider this is why I mentioned Rust versus JavaScript earlier. I think it's probably better to build it in Rust. And then the prompt injection is something that I think that the community itself has to think a lot harder upon like how do we really prevent and protect against prompt injections going forward.

Yeet is built for big moments the most fun you'll ever have with your crypto. Go to yeet.com to yeet into your next game. Trade on anything economics, weather, sports, crypto, and more. All on couch.com 247 for the world's biggest prediction market. Havachi is fast and private per trading secured by ZK with Celestia underneath. Go to habachi.xyz to start trading today.

You know maybe just from a high level you know as much as you can. What is the benefit of coding this in Rust versus JavaScript? Why is Rusk more secure?

Um more efficient um just in terms of a lot of quirks about the language ends up being a more secure language. If you ask and work with many developers these days, I think Rust is gaining main mainstream adoption is probably the the new favorite programming languages in most circles here in Silicon Valley at least.

Very cool. Okay.

Okay. The other thing that I want to talk to you about, George, is you know a another beast of a product that you guys have launched over at Near, you know, aside from private chat, now you've got Open Claw running in TEES. You've also got Near Intense, which is this 10 billion dollar juggernaut helped unleash a lot of the Zcash ecosystem. And I I have a feeling here, you know, once we get this open claw, you know, AI agent up and running and it's in this private secure environment, like you said earlier, we don't want to give it access to our existing crypto wallets, but there is a thesis or a theory that we could give it its own crypto wallets and then it can start making transactions and it can be using near intents and traversing the multi-chain and doing everything in crypto that we can do as humans, maybe even far better than we can do as humans.

Exactly right, Robbie. I combining AI and intents to me is so exciting. Because intents, if you take a step back for those of folks who haven't played with near intense before, what it really is at at its heart is expressing a natural language intent and having a decent centralized network of computers do that thing for you. Right? So now in near intents what it does is you can swap BTC for Zcash right and when you express that intent through UI or through whatever other interface what happens there's a computer a decentralized network of solvers compete to actually make that trade right but if you think about it the cool thing is you can actually abstract that architecture out to do anything I can say you know um I can say express a natural language intent I want a pizza for Right? And you can farm that out to a decentralized network of solvers that actually are able to make a pizza purchase on Door Dash, take money from my um my crypto wallet, and then send the pizza to my home or my office. Right now, imagine when you have agents able to use intents to make transactions and generate economic value in the real world. That's when I think you have something super super special coming, man.

Okay. And so give us a sense of, you know, where where we're at in the agentic economy. You know, right right now, um, I take it I could spin up, Open Claw, whether it's in a TE or it's in a Mac Mini, it can create a Door Dash account, you know, maybe I do give it my address so that it can deliver me a pizza to the right location. Um, and and we can do that. How do we go from, you know, that that landscape, that dynamic to a network of solvers that are competing to bring me the best pizza for the lowest price?

Yeah. Yeah. First of all, you know, this is why security is so important, right? You probably don't want to give your address out to anything that ends up leaking your address. That ends up being very dangerous as, right? Um, so the decentralized network of solvers is very straightforward. It's just like you have a decentralized network of drivers that are willing to drive people for Uber or Lyft or Door Dash, right? Similar to how they built a kind of network of service providers, you can build a similar network of service providers for intents as well, right? The the trading example is very straightforward because it's basically like market making, right? It's like trading market making, but once you get into the real world, you can you can create different kinds of solvers for different kind of problems, right? And I think building up that that supply side of solvers ends up being a important part of the puzzle.

Yeah. George, just a little bit of a broader question. Um, you know, I I'm curious how you're using these agents in your everyday, you know, career and life. How extensively have you integrated them? I take it. I'm sure you've, you know, you've spun up a couple at least a couple of these. What What are some of the more mind-blowing things that these agents have done for you?

So, I did my research for this podcast this morning by texting my Open Claw bot in my near AIT because I was running late. There was like some commuting issues and then I had a busy morning. So, it literally, this is so funny, it like researched um uh the row of podcasts. Yeah. It researched you. It gave me all kinds of recommendations. Sit in this kind of room. Think about sound. Think about lighting. I'm not sure I did a great job, but you know, I listen. It came up with agenda item of things to talk about. And then I texted it. I was like, "Hey, um I'm not going to have time before the podcast. can you research all the agenda items? And I came back and it created a document, a very thoroughly researched document of all the things we should talk about. So, I'm using it all the time. I probably message um some kind of agent 20 30 times a day.

Wow. And your your form factor, your preferred form factor is text.

I like text. I like Telegram. What I like is actually I like toggling between the different ones. So, I'll start off with text and then I'm at my computer where I have Telegram and WhatsApp open. Maybe I'll message him in in WhatsApp. As you know, a lot of folks in crypto are in Telegram. So then I will pick up the conversation in Telegram and it has context from all from the conversations in WhatsApp and SMS.

Exactly. Wow. Right. Wow. And so yeah, so it's living in the TE but your and so how does it for instance like iMessage or or does it have a phone number or or it's just go through uh like a Apple ID account?

So right now the way um you can set up a different a couple different ways. Okay. Um you can get it its own phone number if you want. Um you can end up using your phone number as a way to as a stop gap. It's a little bit weird because if you do end up doing that, you'll get like double copies of the messages, but it works, right? And then for folks who want something just really easy, they can set it up in like Telegram or Discord, right?

Okay. And and and so it gave you all of the agenda items, all the research, anything that it suggested that we talk about that we haven't we haven't covered yet.

Let me see

Others You May Like