This episode dissects the escalating AI bubble, examining how relentless compute expansion fuels innovation while simultaneously igniting a privacy crisis, and why decentralized solutions offer a vital counter-narrative.
AI's Relentless Progress & The Agentic Future
- Model improvements, exemplified by Gemini 3, demonstrate enhanced instruction following and complex task execution.
- AI progress stems from continuous advancements across three fronts: data quality, compute infrastructure, and algorithmic efficiency.
- Scaling laws remain intact: more data, more compute, and better algorithms still directly yield smarter models (see the sketch after this section's quote).
- This progress directly enables the rise of AI agents, whose abilities are bounded by the capabilities of the underlying models.
"The scaling laws of AI are still pretty much intact, which means the more data we put in, the more compute we put in, the more improvements in the algorithm we make, the smarter and better the models and the LLMs will be."
Mastering LLM Interaction: Beyond Prompt Engineering
- Modern LLMs understand user intent without extensive, specific instructions, reflecting significant model development.
- Effective prompting prioritizes defining the ultimate desired outcome, allowing the AI to manage intermediate tasks.
- Providing specific context, such as examples of desired outputs, significantly improves result relevance and accuracy (a short sketch follows this section's quote).
- Role-playing prompts (e.g., "Act as a world-class marketer") often yield no measurable improvement in output quality.
"What's important is to understand what you want to produce ultimately, what is the outcome, define the outcome very clearly for the AI to follow."
The Brewing AI Privacy Crisis & Decentralized Countermeasures
- Current public LLMs (e.g., ChatGPT, Gemini) collect extensive user data, primarily for model training, a process largely invisible to users.
- This data collection will inevitably lead to targeted advertising and monetization, making privacy concerns tangible for consumers.
- Centralized platforms face legal obligations to share user data with authorities, highlighting a fundamental privacy vulnerability.
- Near AI offers a private chat solution built on Trusted Execution Environments (TEEs) and Zero-Knowledge (ZK) proofs, encrypting user data so it cannot leak to operators or be handed to authorities (a conceptual sketch follows below).
"The model is going to start to have a very good idea of who you are, what your interests are, what your likes are. And naturally, as we've seen with every kind of business model on the internet, we're going to see them start to sell stuff to you."
The Agentic Future: Data, Reliability, and Real-World Action
- AI agents excel at research and coding due to clear success metrics and structured data.
- The primary barrier to autonomous real-world actions (e.g., booking flights) remains reliability; chaining tasks compounds even small per-step hallucination rates (worked arithmetic follows this section's quote).
- A critical data gap exists: models lack sufficient training data showing humans interacting with digital interfaces (clicking buttons, navigating screens).
- Significant data collection and model training are underway, suggesting 2025 will see substantial progress in agent capabilities for real-world tasks.
"If you chain all these together, then you can see that over a series of actions, the failure rate becomes higher and higher."
The AI Bubble, Compute Race, and Market Dynamics
- Free markets drive AI innovation, with private capital funding the extremely expensive compute and data center infrastructure required for progress.
- The industry's pursuit of "super intelligence" fuels a massive, ongoing data center race, with companies continuously investing in compute.
- The "bubble" risk hinges on whether AI scaling laws eventually plateau; if increased compute no longer yields significant model improvements, market sentiment could shift.
- No single company currently monopolizes AI; diverse players maintain competitive model capabilities, easing immediate concerns about concentrated access.
"Everyone just has to play the game. It's like musical chairs. You just have to keep walking, you have to keep building, you have to keep racing to build the biggest data center and accumulate as many chips as you can."
Decentralized AI: Traction Signals and Sustainable Models
- Evaluating crypto-AI projects requires assessing product-market fit, user adoption, and developer activity on infrastructure protocols.
- The sector remains early, with many teams building foundational infrastructure for decentralized compute and agentic payments.
- Sustainable business models and revenue generation are crucial, given the high cost of AI inference.
- Bittensor (TAO) functions as an incubator for numerous AI subnets, offering exposure to diverse AI startups, though economic sustainability remains a challenge for many.
"Overall, I would say that we are still in the pretty early phase of crypto and AI. A lot of teams have been focused on building infrastructure."
Investor & Researcher Alpha
- Capital Movement: Investors direct capital into compute infrastructure, decentralized privacy solutions, and early-stage AI agent development.
- New Bottleneck: The critical constraint for AI agents is the lack of reliable, real-world action data for training.
- Obsolete Research: Overly complex "prompt engineering" becomes less relevant as LLMs demonstrate greater intuitive understanding.
Strategic Conclusion
AI's rapid scaling drives both unprecedented innovation and critical challenges in privacy and market concentration. The industry must balance aggressive compute expansion with the development of robust, privacy-preserving, and decentralized AI solutions to ensure equitable access and mitigate systemic risks.