The Rollup
January 13, 2026

How Claude Code is Changing the World with Nick Emmons

How Claude Code Vaporizes the SaaS Moat

By The Rollup

Date: January 13, 2026

Quick Insight: This summary is for builders and investors realizing that "vibe coding" is the end of the traditional software business model. Learn how to use context graphs and agentic workflows to build in a world where code is no longer a defensible asset.

  • 💡 Why is the traditional SaaS subscription model facing an existential threat?
  • 💡 How do context graphs prevent AI from losing quality during long sessions?
  • 💡 What specific workflows turn a casual prompter into a high-output agent director?

Top 3 Ideas

🏗️ The Death of SaaS

SaaS's lifetime was '98 to '25.

  • Commoditized Code Production: Software creation is becoming trivial for anyone with a terminal. The business of selling access to basic tools is no longer a viable long-term strategy.
  • Moat Migration: Defensibility is moving away from code toward user distribution and capital accumulation. If anyone can clone your feature set in an afternoon, your only protection is the network you own.
  • Zero Marginal Cost: AI reduces the cost of building complex functionality to near zero. Investors must stop funding software innovation and start funding distribution dominance.

🏗️ The Context Graph

AI does need context. AI needs memory.

  • Relational Data Importance: Standard RAG systems treat data like a flat text search. Context graphs maintain the connections between your Slack pings and your project tasks, making agents significantly more performant.
  • The Memory Cliff: LLM quality drops sharply as context windows fill up. Using graph-based architectures allows agents to pull relevant memories without drowning in the noise of a single long session.

🏗️ Agentic Workflows

The internet is being rapidly reconfigured to be agent-first.

  • Parallel Agent Architectures: High-level builders use one agent to identify bugs and twenty others to fix them simultaneously. This programmer plus reviewer model creates a self-correcting loop that humans cannot match.
  • Agentic Micro-payments: Subscriptions are a human bandwidth solution, not an economic one. Agents will prefer per-unit pricing via crypto rails, paying for exactly the tokens or compute they consume.

Actionable Takeaways

  • 🌐 The Macro Transition: The migration from human-centric interfaces to agent-first protocols where software is a temporary utility rather than a permanent product.
  • The Tactical Edge: Use Git and MCP servers to give your agents a persistent memory and toolset, allowing them to work autonomously through complex loops.
  • 🎯 The Bottom Line: Software is no longer the prize; it is the commodity. Your value in the next year depends on how well you direct the agents that build it.


I do think planning is a material element of how you produce something great from a vibe coding kind of session versus not.

How do you see this kind of x402, this general agent-first internet, developing with regards to micropayments?

Are there certain elements that you found, certain light bulb moments where you've said, "Oh, this is like the nature of how I prompt a particular agent or LLM."

Welcome back to AI Supercycle, our premier AI show airing every single week presented by Near. We cover the ins and outs of decentralized AI, privacy, and the future of this massive technology. Near is the blockchain for AI and the execution of AI native apps. You can check out Near's latest AI product at near.ai. Sit back, relax, and enjoy the show.

What's happening, guys? Good, good. How are you? Good, man. Welcome, welcome back. Thank you. Yeah, thanks for having me. Happy New Year. Yeah, it's going to be a big year, I think. Yeah, it definitely is.

Guys, I must say Claude Code has been all over my timeline. Before this happened, I felt like OpenAI was light years ahead of everyone else in terms of competition, just looking at general AI. I was like, okay, Sam Altman and OpenAI, they're just dominating.

It feels like the tides are just turning hard right now with Claude Code, browser-based agent workflows, with MCP, the ability for any dumb normie like me to just download a terminal, click a couple buttons, vibe code an app. It feels like the era of browser agents is here.

Nick, from your perspective, what does this mean? How is this sentiment flip happening? Kind of give us your POV here.

I mean, I agree. I think Claude Code's incredible. What Anthropic has done on the agentic coding side is really powerful, obviously. It's sort of surprising that it seems to have popped up out of nowhere, because I personally, a lot of the company, a lot of people I know have been deep on the Claude Code stuff for a while.

I think maybe it has to do with the new year and it being this time of trends disseminating outside of their little bubbles, which I think is a pattern of how different metas permeate different pockets of society. I don't know if you've spent much time looking at the Ralph Wiggum stuff. Ralph Wiggum's pretty big.

Ralph Wiggum has taken the Claude Code ecosystem by storm over the past few weeks. It's basically this tool, or paradigm, for setting Claude Code off on its own, allowing it to work through its own autonomously generated loops to an end product. I'm sure, since you're using Claude Code, you know you have to jump in a lot.

You have to give it approvals to run certain commands. You have to correct it. You have to manage context in a useful way. Ralph solves all of it. Ralph is just this truly autonomous agent: you set it off with, say, a product spec if you're building something, or even non-engineering things theoretically, and it'll just work through to completion.
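
For readers who want to try something Ralph-like themselves, here is a minimal sketch of the idea in Python: drive Claude Code's non-interactive print mode in a loop against a spec file until the agent reports it is done. The file names, the DONE marker, and the permission flag are assumptions for illustration, not the actual Ralph implementation.

    # ralph_loop.py -- minimal sketch of a "run until done" loop around Claude Code.
    # Assumes the `claude` CLI is installed and that `-p` runs one non-interactive
    # turn; SPEC.md, PROGRESS.md, and the DONE marker are hypothetical conventions.
    import subprocess

    PROMPT = (
        "Read SPEC.md and PROGRESS.md. Pick the next unfinished task, "
        "implement it, update PROGRESS.md, and reply with exactly DONE "
        "when every task in SPEC.md is complete."
    )

    for iteration in range(50):  # hard cap so the loop cannot run forever
        result = subprocess.run(
            ["claude", "-p", PROMPT, "--dangerously-skip-permissions"],
            capture_output=True, text=True,
        )
        print(f"iteration {iteration}: {result.stdout[-200:]}")
        if "DONE" in result.stdout:
            break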

These types of developments, and we can talk more about context graphs, which are this big development in LLM context setting, have really entered the mainstream in the past couple of weeks. So I think a bunch of stuff is coming to a head at once, and 2026 is going to be the year that a lot of what's been predicted over the past couple of years around agentic tooling really becomes this fundamental force in our labor market and in how we all individually work day-to-day.

Was it context maps that you mentioned, or context graphs?

Context graphs. Obviously I'm somewhat familiar with context windows, and that's gotten a lot better. Maybe a month or two ago, I noticed ChatGPT started to build a profile around me, started to build up the context it had on my life, and we have an organizational AI account, so it started to combine these different context windows together.

So, what's the extrapolation from context window to context graph? What's the significance?

There's sort of three layers here. There's the context window itself, which is basically the context that the LLM generates throughout your individual session with it. So if you're in a single session in a single chat, it's building context. It understands what you said previously in the chat. It can recall that, use it to inform its responses. That's why it's nice to stay in a single chat for as long as you can.

That works when you have these fairly finite or time-boxed things you're trying to do with an LLM. The problem with context windows alone is that, obviously, as the context grows in a single session, in a single window, the AI's quality drops off significantly. It really falls off a cliff as more context fills that window.

But AI does need context. AI needs memory. It needs to pull from potentially vast data sources, like at a company level if you have a huge amount of data it needs to be pulling from, or if you're working in a specific domain, things like that.

What's been a part of these fully fledged AI systems for a while now is this concept of RAG memory, or just expanding the context window by creating these vector databases of information. When you're conversing with the LLM, it can go pull the relevant information, incorporate it into its logic as it's providing a response, and then give a more informed response.

The problem with non-graph-based systems here is that they lose this relational importance in data. It's an oversimplification, but it's basically doing a text search or a semantic search over the relevant data as you're asking things. And that's not how data is oriented. That's not the optimal way to traverse data.

You know, building a company, there are a bunch of different things. Maybe you say something on Slack here, and you use some CMS or project management tool over there. You have two different accounts, and you need to connect them. You need some relational dependence between them. You need to build a relation, in some sort of graph-based system, between some Slack conversation and some set of to-do list items over in your project management system, whatever it is.

Context graphs have been a thing for a little bit, but I think they're just now entering the mainstream from a narrative perspective. They expand on this extended-memory or RAG context concept by introducing this relational importance amongst pieces of information, which really does supercharge the ability for AI to understand domain-specific problems.

It makes it meaningfully more effective, especially as you start integrating AI into, say, your company in terms of headcount, things like that. Companies' knowledge bases are massive, they grow quite quickly, and they have an immense amount of relational importance in that data. Context graphs enable you to give this interconnected brain, this interconnected data source, to an LLM, to an AI, making it meaningfully more effective, meaningfully more performant in the types of things you're doing with it.
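
As a toy illustration of that relational point, here is a dependency-free Python sketch: a flat keyword search over documents versus a tiny graph that links a Slack message to the project tasks it spawned, so a query about the message also surfaces the connected tasks. The data and node names are invented for the example.

    # context_graph_sketch.py -- flat search vs. graph traversal over the same data.
    docs = {
        "slack:123": "Slack: customer asked for CSV export on the billing page",
        "task:45":   "Task: add CSV export endpoint",
        "task:46":   "Task: billing page download button",
    }

    # Relational layer: which pieces of context are connected to which.
    edges = {
        "slack:123": ["task:45", "task:46"],   # conversation -> tasks it produced
        "task:45":   ["slack:123"],
        "task:46":   ["slack:123"],
    }

    def flat_search(query):
        # RAG-style: return only docs that textually match the query.
        return [k for k, text in docs.items() if query.lower() in text.lower()]

    def graph_search(query):
        # Graph-style: match first, then pull in everything connected to the matches.
        hits = flat_search(query)
        related = {n for h in hits for n in edges.get(h, [])}
        return sorted(set(hits) | related)

    print(flat_search("csv export"))   # ['slack:123', 'task:45'] -- misses task:46
    print(graph_search("csv export"))  # also returns task:46 via the Slack relation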

I think that's really sort of entered the mainstream in the past few weeks and added to this resurgence of conversation going on around LLMs, agentic tooling, all that stuff.

So, very practically and tangibly, for those in our audience who are listening and using these LLMs on a daily or weekly basis, how do you take advantage of this context graphing from a basic LLM user perspective?

So you go on ChatGPT. For my use, I have a main chat that I always go back to, and it's built a lot of context. But then it'll say this chat has run out and you have to make a new one, and the new one doesn't have the context from the other one.

So how can we practically take advantage of the improvements here in the context graph, from just a user perspective?

I'm sure there's some element of this going on in some of the memory integration, the memory features, that have come to ChatGPT in the UI. Claude has this now. I think Gemini probably has this at this point as well. There's probably some aspect of context graphing on the back end, not just simple RAG, informing that memory.

You're having a bunch of different chats with ChatGPT, and it'll remember something from a chat you maybe had six months ago. It's a different chat session, etc., but it's pulling that in. There's probably some aspect of that going on.

I'm not super aware of the commercial tools that exist. I'm sure there are a number of MCP servers now, for example, that allow you to establish your own data store managed in this kind of graph-based architecture, which you can simply integrate into, say, ChatGPT or Claude or Claude Code, any of these things. Over time it builds this kind of memory, this data store that embeds relational importance across your data.

Again, I don't know the specific ones. We've built some custom stuff internally that we use to do this, but I'm sure there are a number of MCPs or similar kinds of tools that do this now that people can integrate into their LLM flows.
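
As an example of what that integration might look like, here is a sketch that writes a project-scoped .mcp.json pointing Claude Code at a hypothetical graph-memory MCP server. The server package name is made up; the general shape of the file (an mcpServers map of command plus args) follows Claude Code's MCP configuration, but check the current docs before relying on it.

    # register_memory_mcp.py -- sketch: point Claude Code at a (hypothetical)
    # graph-memory MCP server via a project-scoped .mcp.json file.
    import json
    from pathlib import Path

    config = {
        "mcpServers": {
            "graph-memory": {                     # the name is arbitrary
                "command": "npx",
                # "example-graph-memory-mcp" is a placeholder package, not a real one.
                "args": ["-y", "example-graph-memory-mcp", "--store", "./memory.db"],
            }
        }
    }

    Path(".mcp.json").write_text(json.dumps(config, indent=2))
    print("wrote .mcp.json -- Claude Code should pick it up on next start")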

So, coming back to Claude Code: what Pump.fun did to altcoins, Claude Code is doing to iPhone apps, right? Or to applications, to software companies.

What do you think are the knock-on effects of making it so easy to deploy an application or a website with really somewhat complex code and functionality? What is the commoditization effect?

In competitive business there's this idea of commoditize the complement, right? You commoditize what makes someone else's business extremely valuable, and thus you make their business less valuable by commoditizing the complement.

Maybe that's not the clearest explanation, but what is the commoditization outlook for programming, software development, application building? What are the second- and third-order consequences of making it so easy to launch applications?

I think the really obvious one, and a very material one, is that SaaS as an industry is in question. SaaS has built an industry around building software as a service, obviously, giving people tools that help bring efficiency into their day-to-day workflows.

When software creation is commoditized and anyone can do it at scale, the SaaS industry itself faces this kind of existential threat, in my opinion. I think SaaS's lifetime was '98 to '25. When anyone can build software tools for whatever they'd like, trivially, just by interacting with something like Claude Code, what is the point of SaaS anymore?

I think that's one thing. It also poses this more systemic risk, and risk is used lightly here because I think a lot of benefits come with this, but it brings this really systemic change to software companies in general, to software businesses. Anyone building software today, for the most part, 90-plus percent of any software-driven business, is now a castle without a moat. Anyone can build a competitive business. There is no real defensibility in software innovation alone.

Network effects and defensibility are going to move elsewhere, whether that's user distribution or capital accumulation if it's a capital-related business, those types of things. I think we will see a meaningful shift in terms of the businesses that proliferate, and that has downstream effects on the funding markets as well.

It's no longer a really profitable venture to invest solely in some new innovation in software, because all software, more or less, is now a non-defensible business. I think it has a lot of big impacts there.

The one thing people talk about a lot, and there's obvious truth to it, but I don't think to the degree people talk about it, is the labor market impact this has on software engineering and software-engineering-related jobs. Claude Code can replace a lot of software engineers, in theory.

I think it still takes more craftsmanship, more directive setting, to make these tools truly autonomous before this has some super major, super ubiquitous impact on labor markets. That will come. I just don't think it's here yet.

Labor markets are going to face this really paradigm-shifting, quasi-existential risk as well. And then you get into stuff like UBI and other things to try to remedy this, and maybe you theorize about what labor markets look like in the future in the face of it.

I think it really does flip a lot of what we accepted as fundamental truths in society today on its head, across all of these core functions. So we'll see a lot of change, and in 2026 and beyond this stuff is only going to keep accelerating at this exponential or super-exponential rate.


From the team that pioneered cold storage, Trezor has just released its new wallet. Guys, if you are still using hot wallets in 2025, 2026, you are missing the entire point. Secure your coins.

Link: trezor.io

Go to yeet.com to yeet into your next game.

Definitely agree there. And there are so many rabbit holes to get into as far as the potential outcomes, the knock-on effects, the consequences of this technology.

I'm curious how someone can take their destiny into their own hands. What turns someone from a good vibe coder into a great vibe coder?

Are there certain elements that you found, certain light bulb moments where you've said, "Oh, this is like the nature of how I prompt a particular agent or LLM that really gets it to do what I'm trying to do."

Let me just chime in here. 0xDesigner put out this tweet with the formula for getting the most out of Claude Code: (goal/outcome) + "interview me thoroughly to extract ideas and intent" + ultrathink + (plan mode on).

Yeah, all of those are great tips, honestly.

I mean, it is true: you literally just type the word ultrathink into your prompt in Claude Code and it expends more resources reasoning, and that leads to better results. Building this Q&A pattern with the AI is another good tool. I think all of those are right.
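
A minimal sketch of that formula as a reusable prompt template; the wording below is just one way to phrase it, not the exact tweet, and the goal string is an invented example.

    # prompt_formula.py -- compose the "goal + interview me + ultrathink + plan mode" prompt.
    def claude_code_prompt(goal: str) -> str:
        return (
            f"Goal: {goal}\n"
            "Before writing any code, interview me thoroughly to extract ideas and intent.\n"
            "ultrathink\n"
            "Start in plan mode and show me the plan for approval first."
        )

    print(claude_code_prompt("Build a CSV-export feature for the billing page"))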

I think we're seeing, even in the early days of this stuff, a material dynamic range in the quality of output that's generated, just based on how you use these tools. There's a lot of prompting stuff like you're talking about, a lot of smaller prompting things you can do.

I come back to the context thing because I think that is a really meaningful piece of this. I think intelligence is only so powerful. Intelligence in the form of these massive models is only so powerful on its own without a robust memory or context system.

I think there's a lot you can do to really ramp up the quality of output you're getting from these systems with robust memory or context systems alongside them, like context graphs, things like that. Just making sure it has access to everything it needs.

What I do a lot is parallel stuff. I'll spin up 24-plus instances of Claude Code alongside each other. I've seen this pattern followed by others online, so that's sort of where the inspiration came from.

You have one or a couple of instances just identify areas for improvement, bugs, feature ideas, etc., enumerate those, and produce context. Then you spin up a bunch of other instances of Claude Code to tackle each of those things, and then a mirroring, parallel set of instances to review them. That creates this kind of pair programming, or programmer-plus-reviewer, paradigm in how you approach Claude Code sessions, stuff like that.

I think that goes a long way. People are doing this a lot more now, but I think there's still a lot of opportunity to use sub-agents as you're working in Claude Code sessions, because of the context window issue.

If you're just talking to a single Claude Code instance, within a single Claude Code session, and you're having it do everything, it runs out of context quickly. You probably see this a lot: it says something like 5% until auto-compact, and then it compacts the context. That means it basically creates a summary of your work in that session, loses a bunch of quality, loses a bunch of fidelity, and then starts a new session with that summary as the beginning. Whereas you can trivially tell Claude Code: break down your tasks, and tackle each one that can be done in parallel in a separate sub-agent that it spins up, which is essentially a different Claude session, and then work through those.

So your master Claude session uses a fraction of the context it would if it were doing all of the work the sub-agents are doing itself. There's a lot of stuff like that.
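
Here is a compressed sketch of that fan-out pattern in Python: one planner pass enumerates issues, a pool of worker instances tackles them in parallel, and a reviewer pass checks each result. It assumes the claude CLI's -p print mode and leaves out worktree isolation, so parallel workers would be editing the same checkout; treat it as the shape of the workflow, not a drop-in tool.

    # parallel_claude_sketch.py -- planner -> parallel workers -> reviewers.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def claude(prompt: str) -> str:
        # One non-interactive Claude Code turn (assumes `claude -p` is available).
        out = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
        return out.stdout

    # 1. Planner: have one instance enumerate discrete, parallelizable tasks.
    plan = claude("List up to 5 small, independent bugs or improvements in this "
                  "repo, one per line, no commentary.")
    tasks = [line.strip() for line in plan.splitlines() if line.strip()][:5]

    # 2. Workers + reviewers: fix each task, then review the fix, in parallel.
    def fix_and_review(task: str) -> str:
        fix = claude(f"Fix the following issue and describe the change: {task}")
        review = claude(f"Review this change for correctness and regressions:\n{fix}")
        return f"TASK: {task}\nREVIEW: {review[:300]}"

    with ThreadPoolExecutor(max_workers=5) as pool:
        for report in pool.map(fix_and_review, tasks):
            print(report, "\n---")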

I think MCPs are the other big unlock in these types of workflows. You can get an MCP for basically anything now and make it a seamlessly integrated tool that agentic tools like Claude Code use as they're working through problems, stuff like that.

There's a discovery problem there, just because of how many of them exist, and people are still trying to understand what different types of MCP tools they need, for example. But if you stack your agentic tool of choice, Claude Code in this instance, with an inventory of MCPs, the level of output you get is meaningfully higher, obviously, just from having access to this big tool set.

This MCP question, this MCP discovery question around using these tools: this is the technology that powers the Chrome browser agentic workflows, right? Those go hand in hand in some ways, maybe.

I actually haven't played a lot with the Chrome browser agentic workflow. But you saw Frank DeGods' whole viral post, right, about canceling your subscriptions? You drop in the CSVs and then Claude goes into the browser for you, asks you questions about it, and then will go and actually cancel them on your behalf.

I think what he's doing, and I could be wrong, is using a specific MCP to do a lot of that, like a browser-use MCP, which basically gives your LLM access to go use your browser like you would, sometimes a headless version of the browser. But yeah, it's that same concept.

And that's happening now, right? This browser thing. Because the whole question was: are we going to be able to book a flight with my agent? Am I going to be able to tell Claude Code I need a flight, I need to cancel this flight, I need to move this flight? A lot of entrepreneurs have EAs, right, and they have these EAs or PAs do all these things for them, and sometimes they're sleeping, sometimes whatever. So that concept seems real now: give a task to an agent, have it do something in the browser for you, book a reservation, book a flight, change a meeting, etc.

Yeah, that's right.

I think the internet is being rapidly reconfigured to be agent-first. Right now it's browser use, these tools for agents to go use the browser similarly to how we use it, but increasingly more and more of these tools are being rebuilt or repackaged to be agent-first, which is the big thing here.

Let's loop in the blockchain and crypto angle to this. Micropayments are something that, in an agent-first internet, you'd likely get pretty excited about: this idea of agentic micropayments for tipping, for posts, for getting behind paywalls, for all these different things. Even if it's just the agent-first economy running on some sort of crypto rails, the economy of micropayments is going to skyrocket. How do you see this kind of x402, this general agent-first internet, developing with regards to micropayments? Are there any use cases you're excited about, anything you've seen in your work with Allora? What is the future of the micropayment-based internet if we're thinking about things not from a human-first perspective but from this agent-first perspective?

Because AI is so much more capable and expressive than humans, what it does from a pricing perspective is make micropayments meaningfully more viable, in my opinion. I think the reason we have such simple pricing models for any good or service today is just a cognitive bandwidth issue in humans.

We pay subscription costs, we prefund accounts, etc., because anything more would impose too much cognitive bandwidth, too much cognitive overhead, on the users, which up to now has been us. When you have such a capable technology that can process so much more information, so much more effectively, in so much less time, you can impose much more granular or nuanced pricing models onto things.

I think we'll see a lot of companies move to more of a per-unit pricing model. We're not going to wrap things in subscription costs as much anymore. It's going to move towards this much more efficient economy of exchange, because AI is just more capable, and blockchains, online payment systems, things like this, enable AI to participate in the actual financial aspects of the economy and of society as a kind of first-class citizen, as opposed to being this tool that lives on the side and helps us make sense of things.
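
To make the pricing point concrete, here is a back-of-the-envelope sketch comparing a flat subscription to per-token billing for an agent; the prices and usage numbers are invented for illustration.

    # per_unit_pricing.py -- subscription vs. per-token billing, toy numbers.
    SUBSCRIPTION_USD_PER_MONTH = 20.00
    PRICE_USD_PER_1K_TOKENS = 0.01          # invented per-unit price

    def per_unit_cost(tokens_used: int) -> float:
        return tokens_used / 1000 * PRICE_USD_PER_1K_TOKENS

    for tokens in (50_000, 500_000, 5_000_000):
        unit = per_unit_cost(tokens)
        cheaper = "per-unit" if unit < SUBSCRIPTION_USD_PER_MONTH else "subscription"
        print(f"{tokens:>9,} tokens/month -> ${unit:7.2f} per-unit vs "
              f"${SUBSCRIPTION_USD_PER_MONTH:.2f} flat ({cheaper} is cheaper)")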

Blockchain will be the tool that all AI uses to coordinate financially, which is basically coordination in general, going forward, I think. And there's a lot of related stuff; we can talk about prediction markets, things like that. I think society just becomes much more speculative in nature as well: you get real-time prices driven by speculation when AI is the de facto user and not humans. I think it produces a much more efficient economy in a bunch of different ways.

I was going to mention that I'd imagine AI itself would be one of the first to flip this subscription model into a per-unit basis, because there's already a very core unit to these LLMs, which is the token. The LLM would charge your payment method, or you'd fund your payment method and it would charge on a per-token basis, rather than, you know, ChatGPT at I think 20 bucks a month, with premium tiers, etc. But as far as what you're getting at in the latter part of your point, Nick, the world becomes more speculative in nature, and it's not humans, maybe humans do as a knock-on effect, but you're saying it's bots and AIs doing the speculating. Is that right?

Exactly, yeah. We have a speculative economy to begin with, in that if the price of an apple goes up or down tomorrow, more or fewer people are going to buy apples. There is in effect an auction going on daily in our lives in those decisions. AI is just so much more efficient that it will be able to do this in real time, continuously. We won't need these lumpy or discrete pricing updates to find the optimal intersection of the supply and demand curve as we do today, you know.

Before we get too philosophical, I feel like we're putting together almost this ultimate guide to using Claude Code. We went through prompting. We went through some of the knock-on effects. Is there anything else you think is extremely important or significant for someone who is breaking into vibe coding or trying to improve their skills?

What are misconceptions or anything that is flying under the radar? Anything that is, you know, particularly significant for someone to keep in mind?

That's a good question. There are a lot of small tools. One thing I think is underused today, and maybe it's because a lot of the new people getting into coding through vibe coding weren't engineers traditionally, so they don't have this lexicon of tools that have been used in engineering for a while, is very simple things like using GitHub, or Git, as the coordination layer amongst the Claude Code instances in your coding setup. Git is built to coordinate amongst software engineers.

It's built to itemize the different things you're working on and to create a sensible set of version controls around the different features or bugs or whatever you're tackling. So I think it's a good exercise for people to practice bringing Git workflows into their agentic coding setups.

Spinning up different worktrees and things like that to break Claude Code sessions, or whatever coding tool you use, though I think Claude Code is the best, into different siloed instances that can work in parallel, and then just fixing conflicts at merge time, things like that.
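
A minimal sketch of that worktree pattern: create an isolated checkout per task so each Claude Code session works on its own branch, then merge and resolve conflicts at the end. It only shells out to standard git commands; launching Claude inside each worktree is left as a comment, and the task names are invented.

    # worktree_fanout.py -- one git worktree (and branch) per parallel agent task.
    import subprocess

    tasks = ["fix-login-bug", "add-csv-export", "refactor-billing"]

    for task in tasks:
        path = f"../wt-{task}"
        # Creates ../wt-<task> checked out on a new branch agent/<task>.
        subprocess.run(["git", "worktree", "add", "-b", f"agent/{task}", path], check=True)
        print(f"worktree ready: {path}")
        # From here you would start a separate Claude Code session in each
        # directory (e.g. `cd ../wt-fix-login-bug && claude`), then merge the
        # agent/<task> branches back and resolve conflicts once they finish.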

I think that's a powerful thing people can do. It's also useful to approach even vibe coding in this kind of multi-model setup. Some models are useful for constructing the initial product spec for the product you're trying to build.

Some models are better at agentic tooling itself. Claude Opus 4.5 right now, I think, is the best at coding. But I do think pulling in multiple models helps: Gemini has a really recent knowledge cutoff, so you can pull in more informed, real-time information to guide tooling choices, like what the optimal tools or frameworks or software libraries are to use; pull in ChatGPT for some deeper research stuff. I do think approaching vibe coding as this more holistic, multi-model exercise is a generally useful way to go about it, as opposed to just typing claude in your terminal and jumping into coding right away.

I do think planning is a material element of how you produce something great from a vibe coding kind of session versus not. So yeah, I think planning is important. Like I said, I think context is super important.

If you're not using Git, then at least establish a simple file system within whatever repo or Claude session you're using, for Claude Code to track what it's been doing, to stay up to date across different sessions, to devise a robust plan, and to update it in real time as it hits snags or as it, or you, come up with different ideas. A simple way is to spin up a TASKS.md file and a SPEC.md file, include those alongside your CLAUDE.md file, and tackle your session that way, having it manage things there as it works through its workflow.
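
A sketch of that lightweight file-based memory: scaffold a SPEC.md and TASKS.md next to CLAUDE.md and use CLAUDE.md to tell Claude Code to keep them updated. The file contents below are placeholders to adapt, not a prescribed format.

    # scaffold_memory_files.py -- create the simple file-based "memory" described above.
    from pathlib import Path

    files = {
        "CLAUDE.md": (
            "# Project notes for Claude Code\n"
            "- Read SPEC.md before starting work.\n"
            "- Keep TASKS.md up to date: mark items done, add new ones as they come up.\n"
        ),
        "SPEC.md": "# Spec\n\nDescribe what you are building here.\n",
        "TASKS.md": "# Tasks\n\n- [ ] (Claude: break the spec into tasks and track them here)\n",
    }

    for name, content in files.items():
        path = Path(name)
        if not path.exists():          # don't clobber existing notes
            path.write_text(content)
            print(f"created {name}")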

There are a bunch of little things like that that go a long way. I think using plugins and commands in Claude Code is really useful. If you're going through a regular flow, like fixing a bug or having it find bugs, spin up a find-bugs command, spin up a bug-fix command, with a prompt and a setup already established, so it can tackle that in a much more codified way.
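
And a sketch of codifying a repeated flow as a custom slash command: Claude Code picks up markdown files under .claude/commands/ as project commands, though the exact invocation name and the prompt body below are illustrative, so check the current docs.

    # make_find_bugs_command.py -- codify a repeated "find bugs" flow as a slash command.
    from pathlib import Path

    cmd_dir = Path(".claude/commands")
    cmd_dir.mkdir(parents=True, exist_ok=True)

    (cmd_dir / "find-bugs.md").write_text(
        "Scan the codebase for likely bugs (error handling, edge cases, race "
        "conditions). List each one with file, line, and a one-sentence fix plan. "
        "Do not change any code yet.\n"
    )
    print("created .claude/commands/find-bugs.md -- invoke it from a Claude Code session")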

A lot of this is stuff people are starting to use more and more in these agentic coding setups, but it does have a meaningful impact on the kind of performance you can get from these things.


Man, I appreciate that. I think anyone who's looking for a primer on getting to the intermediate level of Claude coding will be able to use a lot of these things. I think the 24 sessions you mentioned is insane.

Kind of zooming out of Claude Code and getting to the use cases of AI holistically: it seems like programming is the big use case. I know there's obviously chat and a ton of other things, but, like we started with, there's this sentiment shift. Claude was advertising in NBA stadiums and on TV, but people kind of all knew it wasn't as good as OpenAI, and it was like, nah, of course ChatGPT is better, 5.0's better, 5.1's better. Now it's like, oh wait, is this programming part of the equation so much more valuable?

I guess to both of you guys, Nick, we'll start with you, and Rob, curious about your perspective as well. Is programming the big thing for AI to figure out? Is that the 100x for AI companies? Obviously there are all these other verticals that are important, but is programming the 100x here? And if so, it seems like Claude's winning.

I think it's kind of a heavy, somewhat philosophical question to tackle. I do think it reshapes all of society. AI is just this new paradigm of compute that will replace most of the predecessor forms of compute that society runs on today.

But when you think about what has the most leverage in terms of input to output, coding has one of the largest leverage factors, for lack of a better term. If you write some post or some manifesto online, the level of impact it can have is maybe a couple orders of magnitude, maybe it's 5-10x, whatever it is. When you build a piece of software, it has the potential to affect hundreds of thousands of x of value. We've seen this in the software industry over the past 20 years in general, right?

When someone builds a piece of code, very little capital and very few resources go in, and massive amounts of capital, massive amounts of value, are created as a function of it. So practically speaking, AI being pointed at this problem of software engineering is a super-high-leverage vertical to focus on, and in that regard I think in a lot of ways Claude is winning.

I think they made the right strategic decision to focus in more on that, as opposed to being ultra-general and maneuvering the idea maze more slowly, as maybe some of the other companies did. And as a function of that, I think their lead does have some merit, some defensibility to it, I guess, rather. So yeah.

Nick, I really like the idea of leverage factors as far as where to allocate time, resources, compute. Where can we target and aim AI? And which of those areas are going to have the most impact on society and the most value created? Because it's probably not words.

We've already seen, you know, AI slop on social media and whatnot. So I think coding and software creation has an incredibly high leverage factor. I also think deep math and scientific discoveries are going to be a little bit later to get off the ground.

Just the sheer amount of research and analysis these things can do: they can survey data, they can look through academic papers, and they're already starting to make discoveries on some of the toughest math problems that have gone unsolved. They're starting to crack these things, and I do think it takes longer for those to reach the market, but there are significant multiplying effects as far as what sort of value those discoveries can lead to.

I do think it's primarily theoretical for now in terms of math, science, these different areas of focus, but I would put those areas up there with coding in terms of leverage factors, as far as what kind of value can be created for society after those discoveries are made. It just takes longer to come to market.

I agree with that for sure. Nick Emmons, have a great 2026, man. Hope to see you in New York. Thanks for coming on today. Absolutely fun session. AI Supercycle. It's Claude coding year. Apparently it's vibe coding year. It's the year of vibe coding.

Nick, before you take off, give us like just a quick update on Allora. Like where are you guys at and how are things going?

I mean, the network's live now. We have models coming onto the network. Allora, for context, is this kind of model aggregation layer that pulls together a bunch of different models, enables them to learn off of one another, and collectively solves AI problems in this kind of multi-model format.

We're seeing a lot of integration and a lot of stuff starting to go live this Q1 in the AI x DeFi sector, especially in prediction markets. There are a number of quite successful agents, for example, trading on Polymarket, leveraging collective price predictions from a bunch of different models on Allora to trade these 15-minute and one-hour binary options markets. There's also work being done around sports betting markets that'll come live later in Q1.

Then we'll start to expand into other types of markets. There's a lot of stuff being done around AI-powered strategies or agents on perp DEXes and money markets in DeFi, a lot of stuff like that, because I think the impact AI can have most effectively in the immediate term within the crypto sector is in leveling up the quality of market participant that's interacting with these DeFi systems and these new types of markets being spun up.

Q1 and Q2, I think, are going to be all about this next gen of AI-powered DeFi infrastructure and AI DeFi agents, basically.
