The Rollup
January 13, 2026

How Claude Code is Changing the World with Nick Emmons

The End of the SaaS Moat: How Claude Code and Agentic Context Graphs Rebuild the Internet

Author: The Rollup | Date: January 13, 2026


This summary is for builders and investors watching the walls of traditional software defensibility crumble. It explains why the future of value lies in distribution and capital rather than proprietary code.

  • 💡 Why is the $100B SaaS industry facing an existential threat?
  • 💡 How do context graphs solve the "memory cliff" in LLMs?
  • 💡 What does a speculative, agent-first economy actually look like?

The SaaS Death Spiral

"90% of software businesses are now a castle without a moat."
  • Software Moats Vanish: Code creation is now trivial for anyone with a terminal. Defensibility must move to user distribution or capital accumulation.
  • Commodity Code Reality: Software is becoming like salt: essential but too cheap to be a standalone business. Traditional subscription models will likely fail as custom tools become free to build.
  • Labor Market Pivot: AI replaces the grunt work of engineering. Human value moves toward high-level directive setting and craftsmanship.

Relational Memory Supercharge

"Context graphs introduce relational importance amongst pieces of information."
  • Beyond Simple RAG: Standard vector searches lose the connection between data points. Context graphs allow AI to understand how a Slack message relates to a Jira ticket.
  • Solving Memory Cliffs: LLM quality usually drops as windows fill up. Relational data stores provide an interconnected brain that keeps performance high during long sessions.

The Agentic Economy

"Blockchain will be the tool that all AI uses to coordinate financially."
  • Per-Unit Pricing: Humans prefer subscriptions because of cognitive limits. Agents can handle granular, per-token payments, making the economy more efficient.
  • Speculative Bot Markets: AI can process auctions and price changes in real-time. Markets will become more speculative as bots trade on 15-minute intervals without human fatigue.

Actionable Takeaways

  • 🌐 The Macro Shift: The transition from human-centric interfaces to agent-first protocols. As agents become the primary users, the internet will be rebuilt around machine-readable data and crypto-native payment rails.
  • The Tactical Edge: Integrate Model Context Protocol (MCP) servers into your workflow immediately. Use parallel Claude instances to act as both programmer and reviewer to bypass context window degradation.
  • 🎯 The Bottom Line: Software is no longer a product: it is a utility. Over the next year, the winners will be those who control the data graphs and the distribution channels, not the ones writing the code.


I do think planning is a material element of how you produce something great from a vibe coding kind of session versus not.

How do you see this x402, this general agent-first internet, developing with regards to micropayments?

Are there certain elements that you found, certain light bulb moments where you've said, "Oh, this is like a nature of how I prompt a particular agent or LLM."

Welcome back to AI Supercycle, our premier AI show airing every single week presented by Near. We cover the ins and outs of decentralized AI, privacy, and the future of this massive technology. Near is the blockchain for AI and the execution of AI native apps. You can check out Near's latest AI product at near.ai. Sit back, relax, and enjoy the show.

What's happening, guys? Good, good. How are you? Good, man. Welcome, welcome back. Thank you. Thank you. Yeah, thanks for having me. Happy New Year. Yeah, it's going to be a big year, I think. Yeah, it definitely is.

Claude Code has been all over my timeline. Before this happened, I felt like OpenAI was light years ahead of everyone else in terms of competition, if you look at general AI. I was like, okay, Sam Altman and OpenAI, they're just dominating.

It feels like the tides are turning hard right now with Claude Code, browser-based agent workflows, with MCP, the ability for any dumb normie like me to just download a terminal, click a couple of buttons, and vibe code an app. It feels like the era of browser agents is here.

Nick, from your perspective, what does this mean? How is this sentiment flip happening? Give us your POV here.

I agree. I think Claude Code is incredible. What Anthropic has done on the agentic coding side is really powerful, obviously. It's sort of surprising that it seemed to pop up out of nowhere, because I personally, and a lot of the company, a lot of people I know, have been deep on the Claude Code stuff for a while.

I think maybe it has to do with the new year and it being this time of trends disseminating outside of their little bubbles sometimes, which I think is a trend of how different metas permeate different pockets of society.

I don't know if you've spent much time looking at the Ralph Wiggum stuff. The Ralph Wiggum stuff is pretty big; it's taken the Claude Code ecosystem by storm over the past few weeks. It's basically this tool, or paradigm, for setting Claude Code off on its own and letting it work through its own autonomously generated loops to an end product.

Normally when you're using Claude Code, you have to jump in a lot. You have to give it approvals to run certain commands, you have to correct it, you have to manage context in a useful way. Ralph solves all of that.

Ralph is just this truly autonomous agent: you set it off with, say, a product spec if you're building something, or theoretically even non-engineering things, and it'll just work through to completion.
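For readers who want to see the shape of a Ralph-style loop, here is a minimal sketch in Python. It assumes the Claude Code CLI's headless mode (`claude -p`); the PROMPT.md filename, the DONE completion marker, and the iteration cap are illustrative choices, not the canonical Ralph implementation.

```python
"""Minimal Ralph-style loop: repeatedly hand the same spec to Claude Code
until it reports the work is done. Assumes the `claude` CLI is installed
and headless invocation (`claude -p`) is available; the DONE marker and
iteration cap are arbitrary choices for this sketch."""
import subprocess
from pathlib import Path

PROMPT = Path("PROMPT.md").read_text()  # product spec + standing instructions
MAX_ITERATIONS = 50                      # safety cap so the loop can't run forever

for i in range(MAX_ITERATIONS):
    # Each iteration is a fresh Claude Code session working from the same spec.
    result = subprocess.run(
        ["claude", "-p", PROMPT],
        capture_output=True, text=True,
    )
    print(f"--- iteration {i} ---\n{result.stdout[-2000:]}")
    # Convention used in this sketch: the spec asks Claude to print DONE once
    # every item is implemented and tests pass.
    if "DONE" in result.stdout:
        break
```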

I think these types of developments, along with context graphs, which we can talk more about, have really entered the mainstream in the past couple of weeks; context graphs are this big development in LLM context setting. So a bunch of stuff is coming to a head at once, and 2026 is going to be the year where a lot of what's been predicted over the past couple of years about agentic tooling really becomes a fundamental force in our labor market and in how we all individually work day to day.

Was it context maps that you mentioned, or context graphs?

Context graphs — because obviously I'm somewhat familiar with context windows, and it's gotten a lot better. I remember maybe a month or two ago, I would say something and, you know, ChatGPT started to build a profile around me, started to build the context it had on my life. We have an organizational AI account, and so it started to combine these different context windows together.

What's the extrapolation from context window to context graph? What's the significance?

There are sort of three layers here. There's the context window itself, which is basically the context the LLM builds up throughout your individual session with it. If you're in a single session, a single chat, it's building context: it understands what you said previously in the chat, it can recall that and use it to inform its responses. That's why it's nice to stay in a single chat for as long as you can.

That works when you have fairly finite or time-boxed things you're trying to do with an LLM. The problem with context windows alone is that as the context grows in a single session, in a single window, the AI's quality drops off significantly. It really falls off a cliff as more context fills that window.

AI does need context. AI needs memory. It needs to pull from potentially vast data sources — at a company level, you might have a huge amount of data it needs to pull from, or you're working in a specific domain, things like that.

What's been a part of these fully fledged AI systems for a while now is this concept of RAG memory, or just expanding the context window by creating vector databases of information. When you're conversing with the LLM, it can go find the relevant information, incorporate it into its logic as it's producing a response, and then give a more informed response.

The problem with non-graph-based systems here is that they lose this relational importance in the data. It's an oversimplification, but they're basically doing a text search or a semantic search over the relevant data as you're asking things.

That's not how data is oriented, and it's not the optimal way to traverse data. You know this from building a company: maybe you say something in Slack here, and you use some CMS or project management tool over there. You have two different accounts, you need to connect them, you need some relational dependence between them — a relation, in some sort of graph-based system, between a Slack conversation and a set of to-do items over in your project management system, whatever it is.

Context graphs have been a thing for a little bit, but I think they're just now entering the mainstream from a narrative perspective. They expand on this extended memory or RAG context concept by introducing relational importance amongst pieces of information, which really does supercharge the ability of AI to understand domain-specific problems.

It makes it meaningfully more effective, especially as you start integrating AI into, say, your company in terms of headcount, things like that. Companies' knowledge bases are massive, they grow quite quickly, and they have an immense amount of relational importance in that data. Context graphs let you give this interconnected brain, this interconnected data source, to an LLM or an AI to make it meaningfully more effective, meaningfully more performant in the types of things you're doing with it.
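To make the relational point concrete, here is a toy sketch in plain Python (no particular graph database) of the difference between isolated retrieval and a context graph: retrieval walks the edges from a Slack message to the ticket and pull request it relates to, so related items come back together. All node names and edge labels are invented for illustration.

```python
"""Toy context graph: nodes are pieces of company knowledge, edges carry the
relationship. Retrieval walks the graph from a hit, so related items come back
together instead of as isolated vector-search chunks. Entirely illustrative --
a real system would sit on a graph database plus embeddings."""
from collections import defaultdict

nodes = {
    "slack:123": "Slack: 'checkout is timing out for EU users'",
    "jira:PAY-88": "Jira ticket: EU checkout latency regression",
    "pr:456": "Pull request: add regional cache for payment service",
}

edges = defaultdict(list)  # adjacency list: node -> [(relation, neighbor)]

def relate(a, rel, b):
    edges[a].append((rel, b))
    edges[b].append((f"inverse:{rel}", a))

relate("slack:123", "reported_as", "jira:PAY-88")
relate("jira:PAY-88", "fixed_by", "pr:456")

def context_for(node_id, depth=2):
    """Collect the node plus everything reachable within `depth` hops,
    keeping the relation labels so the LLM sees *how* items connect."""
    seen, frontier, lines = {node_id}, [(node_id, 0)], [nodes[node_id]]
    while frontier:
        current, d = frontier.pop()
        if d == depth:
            continue
        for rel, nbr in edges[current]:
            if nbr not in seen:
                seen.add(nbr)
                lines.append(f"  ({rel}) {nodes[nbr]}")
                frontier.append((nbr, d + 1))
    return "\n".join(lines)

# A plain vector search for "EU checkout timeout" might only return the Slack
# message; the graph walk also surfaces the ticket and the fix.
print(context_for("slack:123"))
```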

I think that's really entered the mainstream in the past few weeks and added to this resurgence of conversation going on around LLMs, agentic tooling, all that stuff.

So, very practically and tangibly, for those in our audience who are listening and using these LLMs on a daily or weekly basis: how do you take advantage of this context graphing from a basic LLM user perspective?

You go on ChatGPT — for my own use, I have a main chat, a main thing I always go back to, and it's built up a lot of context. But then it'll say this chat has run out and you have to go make a new one, and the new one doesn't have the context from the other one.

How can we practically take advantage of the improvements here in context graphs, just from a user perspective?

I'm sure there's some element of this going on in some of the memory integration, the memory features, that have come to ChatGPT in the UI. Claude has this now; I think Gemini probably has it at this point as well. There's probably some aspect of context graphing on the back end, not just simple RAG, informing that memory.

You're having a bunch of different chats with ChatGPT, and it'll remember something from a chat you maybe had six months ago. It's a different chat session, etc., but it's pulling that in. There's probably some aspect of that going on.

I'm not super aware of the commercial tools that exist. I'm sure there are a number of MCP servers now, for example, that let you establish your own data store managed in this kind of graph-based architecture, which you can simply integrate into, say, ChatGPT or Claude or Claude Code. Over time it builds this memory, this data store, that embeds relational importance across your data.

Again, I don't know the specific ones — we've built some custom stuff internally that we use to do this — but I'm sure there are a number of MCPs, or similarly structured tools, that do this now and that people can integrate into their LLM flows.
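As one hedged example of what that wiring might look like: Claude Code can read project-scoped MCP servers from a `.mcp.json` file, and the sketch below writes one for a hypothetical graph-memory server. The server name, package, and environment variable are placeholders for whichever memory MCP you actually choose.

```python
"""Write a project-scoped .mcp.json so Claude Code can launch a memory /
context-graph MCP server alongside a session. The server package name and its
environment variable are hypothetical placeholders."""
import json
from pathlib import Path

config = {
    "mcpServers": {
        "graph-memory": {                          # name shown to the agent
            "command": "npx",
            "args": ["-y", "graph-memory-server"], # placeholder package
            "env": {"MEMORY_PATH": "./.memory"},   # where the graph persists
        }
    }
}

Path(".mcp.json").write_text(json.dumps(config, indent=2))
print("Wrote .mcp.json; restart Claude Code in this directory to pick it up.")
```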

Coming back to Claude Code: what Pump.fun did to altcoins, Claude Code is doing to iPhone apps, right? Or to applications, to software companies.

What do you think are the knock-on effects of the ability to make deploying an application, deploying a website with like really somewhat complex code and functionality so easy?

What is the commoditization effect? In competitive business there's this idea of "commoditize the complement," right? You commoditize what makes someone else's business extremely valuable, and thus you make their business less valuable by commoditizing the complement.

Maybe it's a bit cleaner than that — I don't know if that's the best explanation — but what is the commoditization outlook for programming, software development, application building? What are the second- and third-order consequences of making it so easy to launch applications?

I think the really obvious one, and a very material one, is that SaaS as an industry is in question. SaaS built an industry around software as a service — giving people tools that bring efficiency into their day-to-day workflows.

When software creation is commoditized and anyone can do it at scale, the SaaS industry itself faces a kind of existential threat, in my opinion. I think SaaS's lifetime was '98 to '25. When anyone can build software tools for whatever they'd like, trivially, just by interacting with something like Claude Code, what is the point of SaaS anymore?

That's one thing. I also think it poses a more systemic risk — "risk" used lightly here, because I think there are a lot of benefits that come with this — but it brings really systemic change to software companies in general, to software businesses. For anyone building software today, 90-plus percent of software-driven businesses are now a castle without a moat. Anyone can build a competitive business; there is no real defensibility in software innovation alone. So network effects and defensibility are going to move elsewhere, whether that's user distribution or capital accumulation if it's a capital-heavy business, those types of things. We will see a meaningful shift in terms of the businesses that proliferate, and that has downstream effects on the funding markets as well: it's no longer a really profitable venture to invest solely in some new innovation in software, because all software, more or less, is now a non-defensible business.

I think it has a lot of big impacts there. The one thing people talk about a lot — and there's obvious truth to it, though I don't think to the degree people claim — is the labor market impact this has on software engineering and related jobs. Claude Code can replace a lot of software engineers, in theory.

I think it still takes more craftsmanship, or directive setting, to make these tools truly autonomous enough to have a major, ubiquitous impact on labor markets. That will come; I just don't think it's here yet.

Labor markets are going to face a really paradigm-shifting, quasi-existential risk as well. You get into stuff like UBI and other things to try to remedy this, and maybe you theorize about what labor markets look like in the future in the face of it.

I think it really does flip a lot of what we accepted as fundamental truths in society today on its head, across all of these core functions. So we'll see a lot of change, and in 2026 and beyond this stuff is only going to keep accelerating at an exponential or super-exponential rate.


Definitely agree there. And there are so many rabbit holes to get into as far as the potential outcomes, the knock-on effects, the consequences of this technology.

I'm curious how someone can take their destiny into their own hands. What turns someone from a good vibe coder into a great vibe coder?

Are there certain elements you've found, certain light bulb moments where you've said, "Oh, this is the nature of how I prompt a particular agent or LLM that really gets it to do what I'm trying to do"?

Let me just chime in here. 0xDesigner put out this tweet — the formula for getting the most out of Claude Code: in parentheses, goal/outcome, plus, in quotes, "interview me thoroughly to extract ideas and intent," plus "ultrathink," plus, in parentheses, plan mode on. Yeah, all of those are great tips, honestly.

I mean, it is true: you literally just type the word "ultrathink" into your prompt in Claude Code and it expends more resources on reasoning, and that leads to better results. Building this Q&A pattern with the AI is another good tool. I think all of those are right.
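As a small illustration, here is that formula assembled into an actual prompt string; the goal text is just an example, and plan mode itself is toggled in the Claude Code UI rather than typed into the prompt.

```python
"""Assemble the tweeted formula into a single Claude Code prompt:
(goal/outcome) + "interview me thoroughly..." + ultrathink + plan mode.
The goal below is an invented example."""
goal = "Build a CLI tool that summarizes a folder of markdown notes"

prompt = (
    f"Goal: {goal}\n"
    "Before writing any code, interview me thoroughly to extract ideas and intent.\n"
    "ultrathink\n"
    "Then propose a plan for my approval before implementing."
)
print(prompt)
```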

Even in the early days of this stuff, we're seeing a material dynamic range in the quality of output that's generated, just based on how you use these tools. There's a lot of prompting stuff like you're talking about — a lot of smaller prompting things you can do.

I come back to the context thing, because I think that's a really meaningful piece of this. Intelligence is only so powerful. Intelligence in the form of these massive models is only so powerful on its own, without a robust memory or context system.

There's a lot you can do to really ramp up the quality of output you're getting from these systems by putting robust memory or context systems alongside them — context graphs, things like that — and just making sure it has access to everything it needs.

What I do a lot is parallel stuff. I'll spin up 24-plus instances of Claude Code alongside each other. I've seen this pattern followed by others online, so that's sort of where the inspiration came from.

You have one or a couple of instances just identify areas for improvement, bugs, feature ideas, etc. — enumerate those and produce context. Then you spin up a bunch of other instances of Claude Code to tackle each of those things, and then a mirroring, parallel set of instances to review them. That creates a kind of pair programming, a programmer-plus-reviewer paradigm, in how you approach Claude Code sessions, stuff like that.

I think that goes a long way. People are doing this a lot more now, but there's still a lot of opportunity to use sub-agents as you're working in Claude Code sessions, because of the context window issue. If you're just talking to a single Claude Code instance, within a single Claude Code session, and you're having it do everything, it runs out of context quickly.

You probably see this a lot, where it's at "5% until auto-compact" or something like that, and then it compacts the context. That means it basically creates a summary of your work in that session — losing a bunch of quality, a bunch of fidelity — and then starts a new session with that summary as the beginning. Whereas you can just trivially tell Claude Code: break down your tasks, and tackle each one that can be done in parallel in a separate sub-agent that it spins up, which is essentially a different Claude session, and then work through those.

So your master Claude session uses a fraction of the context it would if it were doing all the work the sub-agents are doing itself.
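Here is a rough sketch of the programmer-plus-reviewer pattern described above, using parallel headless `claude -p` calls. The task list and prompts are invented, and in practice each worker would run in its own worktree or directory (see the Git discussion later) rather than the same checkout.

```python
"""Programmer + reviewer pattern with parallel headless Claude Code calls.
One wave of instances implements tasks, a mirroring wave reviews the results.
Task names and prompts are invented for illustration."""
import subprocess
from concurrent.futures import ThreadPoolExecutor

tasks = [
    "Fix the flaky login test",
    "Add pagination to the /projects endpoint",
    "Tighten input validation on the signup form",
]

def run_claude(prompt: str) -> str:
    out = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
    return out.stdout

def implement(task: str) -> str:
    return run_claude(f"Implement the following and summarize your changes: {task}")

def review(task: str, summary: str) -> str:
    return run_claude(
        f"You are reviewing work on: {task}\n"
        f"Implementer's summary:\n{summary}\n"
        "Inspect the repo, point out bugs or missing tests, and suggest fixes."
    )

with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    summaries = list(pool.map(implement, tasks))        # programmer wave
    reviews = list(pool.map(review, tasks, summaries))  # reviewer wave

for task, rev in zip(tasks, reviews):
    print(f"== {task} ==\n{rev}\n")
```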

There's a lot of stuff like that. I think MCPs are the other big unlock in these types of workflows. You can get an MCP for basically anything now and make it a seamlessly integrated tool that these agentic tools like Claude Code use as they work through problems, stuff like that.

There's a discovery problem there, just because of how many of them exist, and people are still trying to understand what different types of MCP tools they need. But if you stack your agentic tool of choice — Claude Code in this instance — with its inventory of MCPs, the level of output you're going to get is meaningfully higher, obviously, just from having access to that big tool set.

This MCP question, this MCP discovery question around using these tools — this is the technology that powers the Chrome browser agentic workflows, right? Those go hand in hand, maybe.

I actually haven't played a lot with the Chrome browser agentic workflows. But you saw Frank DeGods' whole viral post, right, about canceling your subscriptions? You drop in the CSVs and then Claude goes into the browser for you, asks you questions about it, and then actually goes and cancels them on your behalf.

I think what he's doing — I could be wrong — is using a specific MCP to do a lot of that, like a browser-use MCP, which basically gives your LLM access to go use your browser the way you would, sometimes a headless version of the browser. But yeah, it's that same concept, and that's happening now. Because the whole question was: are we going to be able to book a flight with an agent? Am I going to be able to tell Claude Code, I need a flight, I need to cancel this flight, I need to move this flight? A lot of entrepreneurs have EAs, right — these EAs or PAs who do all these things for them — and sometimes they're sleeping, sometimes whatever. So that concept seems real now: give a task to an agent and have it do something in the browser for you — book a reservation, book a flight, change a meeting, etc. Yeah, that's right.

I think the internet is being rapidly reconfigured to be agent-first. Right now it's browser use — there are tools for these agents to use the browser similarly to how we use it — but increasingly, more and more of these tools are being rebuilt, or repackaged, to be agent-first. That's the big thing here.

Let's loop in the blockchain and crypto angle to this. Micropayments are something that, in an agent-first internet, are likely something to get pretty excited about — this idea of agentic micropayments for tipping on posts, for getting behind paywalls, for all these different things. Even if it's just the agent-first economy running on some sort of crypto rails, the economy of micropayments is going to skyrocket. How do you see this x402, this general agent-first internet, developing with regard to micropayments? Are there any use cases you're excited about, anything you've seen in your work with Allora? What is the future of the micropayment-based internet if we're thinking about things not from a human-first perspective but from this agent-first perspective?

Because AI is so much more capable and expressive than humans, what it does from a pricing perspective is make micropayments meaningfully more viable, in my opinion. I think the reason we have such simple pricing models for any good or service today is just a cognitive bandwidth issue in humans.

We pay subscription costs, we pre-fund accounts, etc., because anything else would impose too much cognitive overhead on the users — which, up to now, has been us. When you have a technology capable of processing so much more information, so much more effectively, in so much less time, you can impose much more granular or nuanced pricing models on things.

I think what we'll see is a lot of companies moving to a per-unit pricing model. We're not going to wrap things in subscription costs as much anymore; it's going to move toward a much more efficient economy of exchange, because AI is just more capable. And blockchains, online payment systems, things like that, enable AI to participate in the actual financial aspects of the economy and of society as a kind of first-class citizen, as opposed to being a tool that lives on the side and helps us make sense of things.

Blockchain will just be the tool that all AI uses to coordinate financially — which is basically coordination in general — going forward, I think. And it ties into things like prediction markets, stuff like that. I think society becomes much more speculative in nature as well: real-time prices driven by speculation when AI is the de facto user and not humans, things like that. I think it produces a much more efficient economy in a bunch of different ways.

I was going to mention that I'd imagine AI itself would be one of the first to flip this subscription model to a per-unit basis, because there's already a very core unit to these LLMs: the token. The LLM would charge your payment method — fund it and charge it — on a per-token basis rather than, you know, whatever ChatGPT is, I think it's 20 bucks a month, and there are premium tiers, etc. But as for what you're getting at in the latter part of your point, Nick: the world becomes more speculative in nature, and it's not humans — maybe humans do as a knock-on effect — but you're saying bots and AIs are doing the speculating. Is that right?
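As a back-of-the-envelope sketch of that subscription-versus-metering comparison, with made-up prices rather than any provider's actual rates:

```python
"""Compare a flat monthly subscription with hypothetical per-token metering.
All prices are illustrative; the point is only that an agent can meter and
settle at the granularity of the unit (tokens) instead of a flat fee."""
SUBSCRIPTION_PER_MONTH = 20.00   # e.g. a $20/month chat plan
PRICE_PER_1K_TOKENS = 0.002      # hypothetical blended rate

def per_unit_cost(tokens_used: int) -> float:
    return tokens_used / 1000 * PRICE_PER_1K_TOKENS

for tokens in (100_000, 2_000_000, 50_000_000):
    metered = per_unit_cost(tokens)
    cheaper = "per-token" if metered < SUBSCRIPTION_PER_MONTH else "subscription"
    print(f"{tokens:>11,} tokens/month -> metered ${metered:8.2f} ({cheaper} is cheaper)")
```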

Exactly, yeah. We have a speculative economy to begin with, in that if the price of an apple goes up or down tomorrow, more or fewer people are going to buy apples. There is, in effect, an auction going on daily in our lives through those decisions. AI is just so much more efficient that it will be able to do this in real time, continuously. We don't need these lumpy, discrete pricing updates to find the optimal intersection of the supply and demand curves, as we do today.

Before we get too philosophical — I feel like we're putting together almost this ultimate guide to using Claude Code. We went through prompting, we went through some of the knock-on effects. Is there anything else you think is extremely important or significant for someone who is breaking into vibe coding or trying to improve their skills?

What are the misconceptions, or anything that's flying under the radar? Anything that's particularly significant for someone to keep in mind?

That's a good question. I think there are a lot of small tools. One thing I think is underused today — maybe because a lot of the people now getting into coding through vibe coding weren't traditionally engineers, so they don't have the lexicon of tools that engineering has been using for a while — is very simple stuff, like using GitHub, or Git, as the coordination layer in your Claude Code setup. Git is built to coordinate amongst software engineers.

It's built to itemize the different things you're working on and create a sensible set of version controls around the different features or bugs you're tackling. So I think it's a good exercise for people to practice bringing Git workflows into their agentic coding setups: spinning up different worktrees and things like that to break Claude Code sessions — or whatever coding tool they use, though I think Claude Code is the best — into siloed instances that can work in parallel, and then just fix conflicts at merge time.
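A minimal sketch of that worktree idea: one branch and one worktree per task, so each parallel Claude Code session gets its own checkout and conflicts only surface at merge time. The branch names and directory layout are just examples.

```python
"""Create one git worktree per task so parallel Claude Code sessions don't
collide. Branch names and the ../worktrees layout are examples; run from the
root of an existing git repository."""
import subprocess
from pathlib import Path

tasks = ["fix-login-test", "add-pagination", "tighten-validation"]
Path("../worktrees").mkdir(exist_ok=True)

for task in tasks:
    branch = f"agent/{task}"
    path = f"../worktrees/{task}"
    # `git worktree add -b <branch> <path>` creates the branch and a separate
    # working directory for it; start one Claude Code session inside each.
    subprocess.run(["git", "worktree", "add", "-b", branch, path], check=True)
    print(f"Worktree ready at {path}; run `claude` there for task '{task}'.")

# Later, merge each branch back and resolve conflicts once, at merge time:
#   git merge agent/fix-login-test
```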

I think that's a powerful thing people can do. I also think it's useful to approach even vibe coding in a multi-model setup. Some models are useful for constructing the initial product spec for whatever you're trying to build.

Some models are better at the agentic tooling itself — Claude Opus 4.5 is, I think, the best at coding right now. I do think pulling in multiple models helps: Gemini has a really recent knowledge cutoff, so you can pull in more informed, up-to-date information about which tools, frameworks, or software libraries are optimal to use, and pull in ChatGPT for deeper research. Approaching vibe coding as a more holistic, multi-model exercise is just a generally useful way to go about it, as opposed to typing "claude" in your terminal and jumping straight into coding.

I do think planning is a material element of how you produce something great from a vibe coding session versus not. So yeah, I think planning is important. Like I said, I think context is super important.

If you're not using Git, then at least establish a simple file system within whatever repo or Claude session you're using, so Claude Code can track what it's been doing, stay up to date across different sessions, devise a robust plan, and update it in real time as it hits snags or as it — or you — comes up with new ideas. A simple way is to spin up a tasks.md file and a spec.md file, include them alongside your CLAUDE.md file, and have it manage things there as it works through its workflow.
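A tiny sketch of that scaffolding: seed a spec.md and tasks.md next to CLAUDE.md and add a note telling the agent to keep them current. The file contents are placeholders.

```python
"""Seed the planning files described above: spec.md for the plan, tasks.md for
the running checklist, and a CLAUDE.md note telling the agent to keep both
current. Contents are placeholder scaffolding."""
from pathlib import Path

Path("spec.md").write_text(
    "# Spec\n\nDescribe what you're building, constraints, and open questions.\n"
)
Path("tasks.md").write_text(
    "# Tasks\n\n- [ ] Break the spec into concrete tasks\n"
    "- [ ] Keep this list updated every session\n"
)

claude_md = Path("CLAUDE.md")
note = (
    "\n## Working agreement\n"
    "- Read spec.md before starting work.\n"
    "- Update tasks.md as tasks are completed, added, or hit snags.\n"
)
claude_md.write_text((claude_md.read_text() if claude_md.exists() else "") + note)
print("Scaffolded spec.md, tasks.md, and a CLAUDE.md working agreement.")
```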

There are a bunch of little things like that that go a long way. I think using plugins and custom commands in Claude Code is really useful: if you're regularly going through a flow like fixing a bug, or having it find bugs, spin up a "find bugs" command or a "bug fix" command that already has a prompt and a setup established, so it can tackle that in a much more codified way.
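As an example of codifying a recurring flow: Claude Code picks up project slash commands from markdown files under `.claude/commands/`, so a sketch like the one below would add a hypothetical `/find-bugs` command; the prompt body is illustrative.

```python
"""Create a reusable /find-bugs slash command for Claude Code. Project
commands live as markdown files under .claude/commands/; the prompt body here
is just an example of codifying a recurring workflow."""
from pathlib import Path

commands_dir = Path(".claude/commands")
commands_dir.mkdir(parents=True, exist_ok=True)

(commands_dir / "find-bugs.md").write_text(
    "Scan the codebase for likely bugs: unhandled errors, race conditions,\n"
    "and off-by-one issues. List each finding with file, line, severity,\n"
    "and a suggested fix. Do not modify any files yet.\n"
)
print("Added /find-bugs -- run it inside a Claude Code session in this repo.")
```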

A lot of this is stuff that people are starting to use more and more in these agentic coding setups, but it does have a meaningful impact on the kind of performance you can get out of these things.


Man, I appreciate that. I think anyone looking for a primer on getting to an intermediate level of Claude coding will be able to use a lot of these things. The 24 sessions you mentioned is insane.

Zooming out from Claude Code to the use cases of AI holistically: it seems like programming is the big use case. I know there's obviously chat and a ton of other things, but — coming back to the sentiment shift I mentioned at the start — Claude was advertising in NBA stadiums and on TV, and people kind of all assumed it wasn't as good as OpenAI; it was, nah, of course ChatGPT is better, 5.0 is better, 5.1 is better. Now it's like, oh wait, is this programming part of the equation so much more valuable?

I guess to both of you guys — Nick, we'll start with you, and Rob, curious about your perspective as well — is programming the big thing for AI to figure out? Is that the 100x for AI companies? Obviously there are all these other verticals that matter, but is programming the 100x here? And if so, it seems like Claude's winning.

I think it's a heavy, somewhat philosophical question to tackle. I do think it reshapes all of society. AI is this new paradigm of compute that will replace most of the predecessor forms of compute that society runs on today.

But when you think about what has the most leverage in terms of input to output, coding has one of the largest leverage factors, for lack of a better term. If you write some post or manifesto online, the level of impact it can have is maybe a couple of orders of magnitude — 5x, 10x, whatever it is. When you build a piece of software, it has the potential to create hundreds of thousands of x in value. We've seen this in the software industry over the past 20 years in general, right? When someone builds a piece of code, very little capital and very few resources go in, and massive amounts of capital — massive amounts of value — are created as a function of it.

So practically speaking, pointing AI at the problem of software engineering is a super-high-leverage vertical to focus on, and in that regard I think Claude is winning in a lot of ways. I think they made the right strategic decision to focus in on that, as opposed to staying ultra-general and maneuvering the idea maze more slowly, as maybe some of the other companies did. As a function of that, I think their lead does have some merit to it — some defensibility to it, I guess.

So yeah.

Nick, I really like the idea of leverage factors as far as where to allocate time, resources, and compute — where we can target and aim AI, and which of those areas will have the most impact on society and create the most value. Because it's probably not words; we've already seen AI slop on social media and whatnot.

I think coding and software creation have an incredibly high leverage factor. I also think deep math and scientific discovery will be a little later to get off the ground. But just the sheer amount of research and analysis these things can do — they can survey data, they can look through academic papers — and they're already starting to make discoveries on some of the toughest math problems that have gone unsolved.

They're starting to crack these things, and I do think it takes longer for those to reach the market, but there are significant multiplying effects in terms of the value those discoveries can lead to. It's primarily theoretical for now — math, science, these areas of focus — but I would put them up there with coding in terms of leverage factors, in terms of what kind of value can be created for society once those discoveries are made. It just takes longer to come to market.

Yeah, I agree with that for sure.

Nick Emmons, have a great 2026, man. Hope to see you in New York. Thanks for coming on today — absolutely fun session. AI Supercycle: it's the year of Claude coding. Apparently it's vibe coding year — the year of vibe coding.

Nick, before you take off, give us like just a quick update on Allora. Like where are you guys at and how are things going?

I mean, the network's live now, and we have models coming onto the network. Allora, for context, is this kind of model aggregation layer that pulls together a bunch of different models, enables them to learn off of one another, and lets them collectively solve AI problems in this multi-model format.

We're seeing a lot of integrations and a lot of stuff starting to go live this Q1 in the AIxD sector, and especially in prediction markets. There are a number of quite successful agents that are, for example, trading on Polymarket, leveraging collective price predictions from a bunch of different models on Allora to trade these 15-minute and one-hour binary options markets. There's work being done around sports betting markets that will come live later in Q1, and we'll start to expand into other types of markets.

There's also a lot being done around AI-powered strategies, or agents, on perp DEXes and money markets in DeFi — a lot of stuff like that, because I think...
