This episode reveals the ambitious three-subnet strategy—Templar, Basilica, and Grail—designed to build a complete, decentralized AI development pipeline on Bittensor, from pre-training to reinforcement learning.
Introduction: A Prophecy for Decentralized AI
- Samuel frames the team's work not as merely building subnets but as fulfilling a prophecy for a free and open internet. He argues that while previous iterations like the early internet and blockchains have been partially co-opted, Bittensor represents the next, more resilient iteration.
- He draws a parallel to historical innovations, stating that individual iterations can fail, but the underlying theme—in this case, freedom of information and intelligence—ultimately succeeds.
- Samuel's perspective is that subnets are not just revenue-generating entities but "tools by which the prophecy of Bittensor is manifested." This philosophical underpinning drives their strategy to build fundamentally new, crypto-native products rather than creating cheaper Web2 copies.
Templar: Pushing the Frontier of Pre-Training
- The discussion begins with Templar (Subnet 3), the team's decentralized pre-training protocol. Samuel passionately rejects the narrative that pre-training is "dead" or that the community should simply rely on base models from centralized entities like Meta or China.
- Strategic Imperative: He warns against depending on corporations that will eventually "turn the tap off" on open-weight models once they achieve market dominance. "Anyone that believes Mark Zuckerberg or China... are going to give us tools of liberation is an idiot."
- Key Milestone: Templar is currently training a 70 billion parameter model, the largest known decentralized training run. This represents a 58x increase in model size in just nine months, demonstrating rapid scaling capabilities.
- Future Direction: The next milestone for Templar is not necessarily a larger model but producing a high-quality, production-ready base model that can be used across the Bittensor ecosystem.
The Breakthrough: SparseLoCo and Decentralized Training
- A significant portion of the conversation focuses on SparseLoCo, a novel optimizer developed by the Templar research team that addresses the critical communication bottleneck in distributed training.
- Technical Innovation: Air, a researcher on the team, explains that SparseLoCo combines top-k gradient sparsification with 2-bit quantization to achieve over 99% communication compression while improving model performance. This method outperforms existing state-of-the-art low-communication optimizers such as Google DeepMind's DiLoCo.
- Why It Matters: Jake (Const) emphasizes the significance of this breakthrough. Without an efficient communication method, training large models across a decentralized network is infeasible because of the massive bandwidth required to synchronize model updates between nodes.
- Actionable Insight: Jake states, "It is really the lynchpin or the keystone to enable decentralized training." This innovation is a core technological moat for Bittensor, enabling it to train globally competitive models without needing centralized data centers. Investors should view this as a fundamental validation of Bittensor's technical approach.
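The compression idea described above can be sketched in a few lines. This is an illustrative toy, not Templar's actual SparseLoCo implementation: the function names are hypothetical, and the real optimizer involves considerably more machinery than simple top-k selection plus uniform 2-bit quantization.

```python
import numpy as np

def compress_gradient(grad, k_fraction=0.01):
    """Toy sketch: keep only the top-k fraction of gradient entries by
    magnitude, then quantize the survivors to 2 bits (4 levels).
    Illustrative only -- not the actual SparseLoCo algorithm."""
    flat = grad.ravel()
    k = max(1, int(k_fraction * flat.size))
    # Top-k sparsification: indices of the k largest-magnitude entries.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    vals = flat[idx]
    # 2-bit uniform quantization: map each kept value to one of 4 levels
    # spanning [min, max] of the kept values.
    lo, hi = vals.min(), vals.max()
    scale = (hi - lo) / 3 if hi > lo else 1.0
    codes = np.round((vals - lo) / scale).astype(np.uint8)  # values in {0,1,2,3}
    return idx, codes, lo, scale, flat.size

def decompress_gradient(idx, codes, lo, scale, size):
    """Rebuild a dense (mostly zero) gradient from the compressed message."""
    flat = np.zeros(size)
    flat[idx] = lo + codes.astype(np.float64) * scale
    return flat
```

With `k_fraction=0.01`, each node ships roughly 1% of the entries at 2 bits apiece plus their indices instead of a dense 32-bit gradient, which is where a ">99% compression" figure comes from in spirit; the exact ratio in practice depends on index encoding and the optimizer's other state.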
Basilica: The Compute Substrate
- Next, Samuel introduces Basilica (Subnet 39), their compute network. He is candid about the challenges of building a decentralized compute marketplace, arguing that simply reselling compute is a difficult business model with thin margins.
- The Strategy: Basilica's long-term vision is to move beyond raw compute rentals and build high-margin, value-added services on top of its hardware network.
- Initial Phase: For the next 2-3 months, Basilica will focus on solidifying its core infrastructure as a permissionless compute rental network, using their own subnets (Templar and Grail) to "dogfood" the service.
- Future Services: Examples of value-added services include verifiable inference and hardware efficiency innovations that could allow cheaper GPUs (like A100s) to perform like more expensive ones (H100s), directly attacking the "Nvidia tax."
Grail: The Future of Reinforcement Learning
- The final piece of the trilogy is Grail (Subnet 81), a new subnet focused on post-training and reinforcement learning (RL), the technique in which an agent learns to achieve goals by receiving rewards or penalties for its actions.
- The Motivation: The team recognized the growing importance of post-training, noting that models like Grok-4 dedicated as much compute to RL as to pre-training. Grail aims to decentralize this crucial step in creating intelligent agents.
- Core Innovation: The team developed a novel algorithm, also named Grail, for verifiable inference. It lets validators efficiently prove that a miner used the correct model to generate an output by spot-checking small samples of the model's hidden states, which is faster and cheaper than existing verification methods.
- Strategic Application: This verification technology is a product in its own right. Offered as a service on Basilica, it can ensure AI providers are not quietly substituting cheaper, less capable models, a massive problem in the centralized AI industry.
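The hidden-state spot-check described above can be illustrated with a minimal sketch. Everything here is a simplifying assumption rather than the actual Grail protocol: the function names, the cosine-similarity comparison, and the tolerance are all hypothetical stand-ins for whatever commitment scheme the team actually uses.

```python
import numpy as np

def sample_positions(seq_len, n_samples, seed):
    """Deterministically sample token positions to spot-check,
    seeded so validator and miner agree on the same sample."""
    rng = np.random.default_rng(seed)
    return rng.choice(seq_len, size=min(n_samples, seq_len), replace=False)

def verify_inference(claimed_hidden, recomputed_hidden, positions, tol=1e-3):
    """Compare the miner's claimed hidden states against the validator's
    own recomputation at a few sampled positions. A different (e.g. smaller,
    cheaper) model produces very different hidden states, so a small sample
    suffices. Hypothetical sketch -- not the actual Grail algorithm."""
    for p in positions:
        a, b = claimed_hidden[p], recomputed_hidden[p]
        # Cosine similarity between claimed and recomputed state vectors.
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if cos < 1.0 - tol:
            return False  # mismatch: likely a different model was used
    return True
```

The efficiency argument is that the validator only recomputes a handful of positions rather than the full generation, trading a tiny probability of missing a cheat for a large reduction in verification cost.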
The Unified Vision: Covenant.ai
- Samuel concludes by revealing that the three subnets will operate under a unified architecture called Covenant.ai, representing a holistic approach to capturing the entire "intelligence continuum."
- Escaping Local Minima: He argues that the ecosystem must escape the "PvP" (player-versus-player) mindset of simply gaming emissions. The goal is to invest in deep research to build unique primitives that leapfrog centralized competitors.
- Integrated Pipeline: The strategy is to create a flywheel:
- Templar produces state-of-the-art base models.
- Basilica provides the decentralized compute for all processes.
- Grail uses reinforcement learning to imbue the base models with advanced intelligence.
- Actionable Insight: This integrated structure is designed to create a powerful, self-reinforcing ecosystem. Researchers and investors should monitor the synergies between these three subnets, as their combined success could produce a product far greater than the sum of its parts.
Conclusion
This episode outlines a clear, ambitious strategy to build a full-stack, decentralized AI pipeline on Bittensor. The team's focus on fundamental research and integrated systems, rather than isolated products, presents a compelling long-term vision for competing with centralized AI giants.