The Rollup
December 14, 2025

Teng Yan: How To Capitalize on The Biggest AI Bubble Ever (...And Who's Winning Already)

AI is in a relentless growth phase, fueled by fundamental scaling laws that continue to yield smarter models. This progress drives the rise of AI agents, but their real-world utility hinges on overcoming reliability challenges and acquiring new forms of training data. Meanwhile, the industry grapples with privacy concerns and the economic sustainability of decentralized alternatives.

The Unstoppable Engine of AI Progress

  • “If you look at what goes into an AI model, there are probably three main components: There is the data, there is the compute, and then there's the actual algorithm itself. And all three of these frontiers are seeing constant improvements.”
  • Scaling Laws Hold: AI model advancements are not slowing. Improvements in data quality and volume (including synthetic data), compute power (massive data centers), and algorithmic efficiency consistently produce more capable LLMs. Think of it like a chef continually refining ingredients, upgrading the kitchen, and perfecting techniques to make a better dish every time.
  • Agentic Future: Better base models directly translate to more sophisticated AI agents. These agents will soon handle more complex, multi-step tasks with greater focus and over longer durations.
  • The Reliability Hurdle: The primary barrier for agents performing real-world actions (like booking a flight) is reliability. Even a 1-2% hallucination rate, when chained across multiple steps, leads to unacceptable failure rates.
  • New Data Needed: A critical missing piece for advanced agents is specific training data showing humans interacting with computers—clicking buttons, navigating UIs. This "human-computer interaction" data is essential for agents to operate beyond their internal LLM capabilities.
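The compounding-failure point above can be checked with quick arithmetic: if each step in a chained workflow fails independently with a small probability, the chance that the whole chain succeeds shrinks exponentially with chain length. A minimal sketch (the step counts are hypothetical; the 1-2% per-step error figures come from the discussion above):

```python
# Illustrative only: how a small per-step hallucination rate compounds
# across a chained agent workflow, assuming independent failures.
def chained_success_rate(per_step_error: float, steps: int) -> float:
    """Probability that every step in the chain succeeds."""
    return (1 - per_step_error) ** steps

for error in (0.01, 0.02):
    for steps in (5, 10, 20):
        failure = 1 - chained_success_rate(error, steps)
        print(f"{error:.0%} error/step, {steps} steps -> "
              f"{failure:.1%} chance of at least one failure")
# 2% error per step over a 20-step task leaves roughly a
# one-in-three chance that something in the chain goes wrong.
```

This is why a hallucination rate that looks negligible in a single chat turn becomes unacceptable once an agent must string many actions together.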

Mastering the Nuances of AI Interaction

  • “The different models actually have quite different personalities. So the same prompt, whether you use it in ChatGPT or Gemini or Grok, you could get quite different answers.”
  • Prompt Engineering Evolves: The early hype around "prompt engineering" is diminishing. Modern LLMs increasingly understand user intent even with concise instructions, reflecting significant model progress.
  • Outcome-First Prompting: The most effective prompting defines the desired outcome clearly. For complex tasks, provide specific, sequential instructions. For general queries, allow the model to reason and curate information.
  • Context is Crucial: Supplying relevant context (e.g., examples of desired output, specific filtering criteria) dramatically improves model accuracy and relevance. Imagine giving a chef a clear picture of the final dish and your dietary restrictions, rather than just saying "make dinner."
  • Model Personalities: Different LLMs (ChatGPT, Gemini, Grok) exhibit distinct strengths. Grok is noted for creativity but can be "unhinged," while Gemini excels at following complex sequences.

Privacy, Decentralization, and the Compute Arms Race

  • “What's going to happen pretty soon... the model is going to start to have a very good idea of who you are, what your interests are, what your likes are. And naturally, as we've seen with every kind of business model on the internet, we're going to see them start to sell stuff to you.”
  • Impending Privacy Crisis: Centralized AI platforms collect vast user data. This data will inevitably be monetized through targeted advertising, making privacy a tangible concern for consumers.
  • Blockchain's Counter: Decentralized AI (like Near's private chat) offers a solution by encrypting data, preventing its use for training or sharing. This is like keeping your diary in a locked, personal vault, not a public library.
  • The Compute Arms Race: The "AI bubble" is fueled by the belief that more compute leads to superintelligence. This drives a relentless race to build larger data centers and acquire more chips. This "musical chairs" of compute will continue until the scaling laws break.
  • Decentralized Sustainability: Decentralized AI projects (like Bittensor subnets) must demonstrate product-market fit, developer activity, and viable revenue models to achieve economic sustainability beyond token subsidies. Bittensor acts as an "incubator" for many AI startups, with some subnets showing early revenue.

Key Takeaways:

  • Compute is King (for now): The race for compute and data center capacity will intensify until the fundamental scaling laws of AI hit a wall.
  • Agents are Coming, with Caveats: Expect significant agentic progress in 2026, but real-world, fully autonomous agents require breakthroughs in reliability and new human-computer interaction data.
  • Privacy as a Differentiator: Decentralized AI offering true data privacy will become a critical value proposition as centralized platforms inevitably monetize user data.

Podcast Link: Link

This episode dissects the escalating AI bubble, revealing how relentless compute expansion fuels innovation while simultaneously igniting a critical privacy crisis—and why decentralized solutions offer a vital counter-narrative.

AI's Relentless Progress & The Agentic Future

  • Model improvements, exemplified by Gemini 3, demonstrate enhanced instruction following and complex task execution.
  • AI progress stems from continuous advancements across three fronts: data quality, compute infrastructure, and algorithmic efficiency.
  • Scaling laws remain intact: more data, compute, and algorithmic improvements directly yield smarter models.
  • This progress directly enables the rise of AI agents, which are constrained by underlying model capabilities.

"The scaling laws of AI are still pretty much intact, which means the more data we put in, the more compute we put in, the more improvements in the algorithm we make, the smarter and better the models and the LLMs will be."

Mastering LLM Interaction: Beyond Prompt Engineering

  • Modern LLMs understand user intent without extensive, specific instructions, reflecting significant model development.
  • Effective prompting prioritizes defining the ultimate desired outcome, allowing the AI to manage intermediate tasks.
  • Providing specific context, such as examples of desired outputs, significantly improves result relevance and accuracy.
  • Role-playing prompts (e.g., "Act as a world-class marketer") often yield no measurable improvement in output quality.

"What's important is to understand what you want to produce ultimately, what is the outcome, define the outcome very clearly for the AI to follow."
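The outcome-first pattern described above can be sketched as a simple prompt-assembly helper. The field names and example text below are illustrative assumptions, not a specific API: the point is the ordering — state the desired outcome first, then context, then a sample of the output you want.

```python
# A sketch of "outcome-first" prompting: lead with the goal, then add
# context and an example of the desired output. Purely illustrative.
def build_prompt(outcome: str, context: str = "", example: str = "") -> str:
    """Assemble a prompt that defines the outcome before anything else."""
    parts = [f"Goal: {outcome}"]
    if context:
        parts.append(f"Context: {context}")
    if example:
        parts.append(f"Example of the output I want:\n{example}")
    return "\n\n".join(parts)

prompt = build_prompt(
    outcome="Summarize this earnings call in five bullets for a retail investor.",
    context="Focus on revenue guidance and AI capex; skip boilerplate.",
    example="- Revenue guidance raised for the next fiscal year.",
)
print(prompt)
```

The same structure works as a plain-text template; the helper just makes the ordering explicit, mirroring the advice to give the model the finished-dish picture before the recipe.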

The Brewing AI Privacy Crisis & Decentralized Countermeasures

  • Current public LLMs (e.g., ChatGPT, Gemini) collect extensive user data, primarily for model training, a process largely invisible to users.
  • This data collection will inevitably lead to targeted advertising and monetization, making privacy concerns tangible for consumers.
  • Centralized platforms face legal obligations to share user data with authorities, highlighting a fundamental privacy vulnerability.
  • Near AI offers a private chat solution utilizing Trusted Execution Environments (TEEs) and Zero-Knowledge (ZK) proofs, encrypting user data to prevent leakage or government access.

"The model is going to start to have a very good idea of who you are, what your interests are, what your likes are. And naturally, as we've seen with every kind of business model on the internet, we're going to see them start to sell stuff to you."

The Agentic Future: Data, Reliability, and Real-World Action

  • AI agents excel at research and coding due to clear success metrics and structured data.
  • The primary barrier to autonomous real-world actions (e.g., booking flights) remains reliability; chained tasks amplify failure rates from even small hallucination percentages.
  • A critical data gap exists: models lack sufficient training data showing humans interacting with digital interfaces (clicking buttons, navigating screens).
  • Significant data collection and model training are underway, suggesting 2026 will see substantial progress in agent capabilities for real-world tasks.

"If you chain all these together, then you can see that over a series of actions, the failure rate becomes higher and higher."

The AI Bubble, Compute Race, and Market Dynamics

  • Free markets drive AI innovation, with private capital funding the extremely expensive compute and data center infrastructure required for progress.
  • The industry's pursuit of "super intelligence" fuels a massive, ongoing data center race, with companies continuously investing in compute.
  • The "bubble" risk hinges on whether AI scaling laws eventually plateau; if increased compute no longer yields significant model improvements, market sentiment could shift.
  • No single company currently monopolizes AI; diverse players maintain competitive model capabilities, mitigating immediate ethical concerns about access.

"Everyone just has to play the game. It's like musical chairs. You just have to keep walking, you have to keep building, you have to keep racing to build the biggest data center and accumulate as many chips as you can."

Decentralized AI: Traction Signals and Sustainable Models

  • Evaluating crypto-AI projects requires assessing product-market fit, user adoption, and developer activity on infrastructure protocols.
  • The sector remains early, with many teams building foundational infrastructure for decentralized compute and agentic payments.
  • Sustainable business models and revenue generation are crucial, given the high cost of AI inference.
  • Bittensor (TAO) functions as an incubator for numerous AI subnets, offering exposure to diverse AI startups, though economic sustainability remains a challenge for many.

"Overall, I would say that we are still in the pretty early phase of crypto and AI. A lot of teams have been focused on building infrastructure."

Investor & Researcher Alpha

  • Capital Movement: Investors direct capital into compute infrastructure, decentralized privacy solutions, and early-stage AI agent development.
  • New Bottleneck: The critical constraint for AI agents is the lack of reliable, real-world action data for training.
  • Obsolete Research: Overly complex "prompt engineering" becomes less relevant as LLMs demonstrate greater intuitive understanding.

Strategic Conclusion

AI's rapid scaling drives both unprecedented innovation and critical challenges in privacy and market concentration. The industry must balance aggressive compute expansion with the development of robust, privacy-preserving, and decentralized AI solutions to ensure equitable access and mitigate systemic risks.
