taostats
August 29, 2025

Novelty Search August 28, 2025

Samuel, the visionary behind Templar, lays out Covenant.ai's master plan: a "Holy Trinity" of subnets—Templar, Basilica, and Grail—designed to capture the entire intelligence continuum on Bittensor. This isn't just about launching more subnets; it's a strategic imperative to build fundamentally new, crypto-native products that can leapfrog their centralized counterparts.

Templar: The Pre-Training Frontier

  • "Anyone that believes Mark Zuckerberg or China... are going to give us tools of liberation is an idiot... If we do not keep persevering on being able to compress the library of Alexandria into a weight matrix, we're pretty much screwed because eventually, these guys turn the tap off."
  • "[Sparse LoCo] is really the lynchpin or the keystone to enable decentralized training... it's an absolutely massive innovation that puts the work out of Bittensor really at the forefront of our field."
  • Templar is running the world's largest decentralized training run, scaling a model to 70 billion parameters—a 58x increase in just nine months.
  • The team developed Sparse LoCo, a novel optimizer that achieves over 99% communication compression. This breakthrough makes large-scale distributed training across disparate hardware not just possible but efficient, outperforming distributed-training methods from Google DeepMind.

Basilica: Compute as a Platform, Not a Commodity

  • "Selling decentralized compute is not it; it's hard... The only way to do that is value-added services."
  • Basilica’s strategy transcends being a simple compute rental network. The goal is to build a platform for unique, high-value services that are impossible to replicate in a centralized context.
  • The future roadmap includes verifiable inference and radical hardware efficiency innovations. One moonshot goal is to develop services that allow cheaper A100 GPUs to deliver the performance of H100s, fundamentally breaking the "Jensen tax."

Grail: The Post-Training Intelligence Engine

  • "Grok-4 changed this in our minds because for the first time, the compute budgets for pre-training and post-training were exactly the same."
  • Inspired by the massive compute demands of post-training, Grail was created to tackle reinforcement learning (RL) at scale.
  • A core innovation is a proprietary verification algorithm, also named Grail, that is faster and cheaper than existing methods. It allows validators to prove that miners are generating inferences from the correct model—a critical building block for distributed RL and verifiable inference services on Basilica.

Key Takeaways

  • Covenant.ai’s strategy is a vertically integrated assault on the full AI stack. They are building an end-to-end, decentralized intelligence factory, with Templar forging the base models, Grail refining them through reinforcement learning, and Basilica providing the specialized compute engine.
  • Full-Stack Dominance. The synergy between pre-training (Templar), post-training (Grail), and specialized compute (Basilica) creates a powerful flywheel, positioning them to build models and services end-to-end within their own ecosystem.
  • Research is the Moat. The team’s edge comes from fundamental research breakthroughs like Sparse LoCo and the Grail verification algorithm, creating unique capabilities rather than just competing on price or copying Web2 business models.
  • Beyond Commodity Compute. The vision for Basilica is clear: evolve beyond rentals and offer unique, high-margin services like verifiable inference and compute optimization that solve critical problems for the entire decentralized AI space.

Link: https://www.youtube.com/watch?v=1fHkdt-ylS4

This episode reveals the ambitious three-subnet strategy—Templar, Basilica, and Grail—designed to build a complete, decentralized AI development pipeline on Bittensor, from pre-training to reinforcement learning.

Introduction: A Prophecy for Decentralized AI

  • Samuel frames the team's work not as merely building subnets but as fulfilling a prophecy for a free and open internet. He argues that while previous iterations like the early internet and blockchains have been partially co-opted, Bittensor represents the next, more resilient iteration.
  • He draws a parallel to historical innovations, stating that individual iterations can fail, but the underlying theme—in this case, freedom of information and intelligence—ultimately succeeds.
  • Samuel's perspective is that subnets are not just revenue-generating entities but "tools by which the prophecy of Bittensor is manifested." This philosophical underpinning drives their strategy to build fundamentally new, crypto-native products rather than creating cheaper Web2 copies.

Templar: Pushing the Frontier of Pre-Training

  • The discussion begins with Templar (Subnet 3), the team's decentralized pre-training protocol. Samuel passionately refutes the narrative that pre-training is "dead" or that the community should rely on base models from centralized entities like Meta or China.
  • Strategic Imperative: He warns against depending on corporations that will eventually "turn the tap off" on open-weight models once they achieve market dominance. "Anyone that believes Mark Zuckerberg or China... are going to give us tools of liberation is an idiot."
  • Key Milestone: Templar is currently training a 70 billion parameter model, the largest known decentralized training run. This represents a 58x increase in model size in just nine months, demonstrating rapid scaling capabilities.
  • Future Direction: The next milestone for Templar is not necessarily a larger model but producing a high-quality, production-ready base model that can be used across the Bittensor ecosystem.

The Breakthrough: Sparse LoCo and Decentralized Training

  • A significant portion of the conversation focuses on Sparse LoCo, a novel optimizer developed by the Templar research team that solves the critical communication bottleneck in distributed training.
  • Technical Innovation: Air, a researcher on the team, explains that Sparse LoCo uses top-k compression and 2-bit quantization to achieve over 99% communication compression while improving model performance. This method outperforms existing state-of-the-art optimizers like Google DeepMind's DiLoCo.
  • Why It Matters: Jake (Const) emphasizes the significance of this breakthrough. Without an efficient communication method, training large models across a decentralized network is impossible due to the massive bandwidth required for merging model weights.
  • Actionable Insight: Jake states, "It is really the lynchpin or the keystone to enable decentralized training." This innovation is a core technological moat for Bittensor, enabling it to train globally competitive models without needing centralized data centers. Investors should view this as a fundamental validation of Bittensor's technical approach.
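To make the two ideas concrete, here is a toy sketch of top-k sparsification followed by 2-bit quantization. This is an illustration of the general technique described in the episode, not Templar's actual Sparse LoCo implementation; the function names, the 1% top-k fraction, and the uniform 4-level quantizer are all assumptions.

```python
import numpy as np

def compress(grad, k_fraction=0.01):
    """Toy top-k + 2-bit quantization (NOT the real Sparse LoCo)."""
    flat = grad.ravel()
    k = max(1, int(k_fraction * flat.size))
    # Keep only the k largest-magnitude gradient entries.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    vals = flat[idx]
    scale = np.abs(vals).max()
    # 2-bit uniform quantization: 4 codes covering [-scale, scale].
    codes = np.clip(np.floor((vals / scale + 1.0) * 2.0), 0, 3).astype(np.uint8)
    return idx, codes, scale

def decompress(idx, codes, scale, shape):
    """Rebuild a sparse gradient from indices, codes, and scale."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = ((codes + 0.5) / 2.0 - 1.0) * scale  # bin centers
    return flat.reshape(shape)

grad = np.random.randn(1_000_000).astype(np.float32)
idx, codes, scale = compress(grad)
# Payload: k * (32-bit index + 2-bit code) vs. 1M dense 32-bit floats.
ratio = (idx.size * 34) / (grad.size * 32)
print(f"compressed to {ratio:.2%} of dense size")  # roughly 1%, i.e. >98% compression
```

Even this naive version lands near the compression figures quoted in the episode, because most of the payload cost disappears once only the top 1% of entries (plus their indices) are transmitted.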

Basilica: The Compute Substrate

  • Next, Samuel introduces Basilica (Subnet 39), their compute network. He is candid about the challenges of building a decentralized compute marketplace, arguing that simply reselling compute is a difficult business model with thin margins.
  • The Strategy: Basilica's long-term vision is to move beyond raw compute rentals and build high-margin, value-added services on top of its hardware network.
  • Initial Phase: For the next 2-3 months, Basilica will focus on solidifying its core infrastructure as a permissionless compute rental network, using their own subnets (Templar and Grail) to "dogfood" the service.
  • Future Services: Examples of value-added services include verifiable inference and hardware efficiency innovations that could allow cheaper GPUs (like A100s) to perform like more expensive ones (H100s), directly attacking the "Nvidia tax."

Grail: The Future of Reinforcement Learning

  • The final piece of the trilogy is Grail (Subnet 81), a new subnet focused on post-training and reinforcement learning (RL), a machine learning technique in which an AI agent learns to achieve goals by receiving rewards or penalties for its actions.
  • The Motivation: The team recognized the growing importance of post-training, noting that models like Grok-4 dedicated as much compute to RL as to pre-training. Grail aims to decentralize this crucial step in creating intelligent agents.
  • Core Innovation: The team developed a novel algorithm, also named Grail, for verifiable inference. This allows validators to efficiently prove that a miner used the correct model to generate an output by comparing small samples of the model's hidden states. This is faster and cheaper than existing methods.
  • Strategic Application: This verification technology is a startup in itself. It can be offered as a service on Basilica to ensure AI providers are not secretly using cheaper, less capable models—a massive problem in the centralized AI industry.
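The sampling idea can be sketched as follows. This is a toy illustration of spot-checking hidden states, not the actual Grail algorithm: the `hidden_states` function is a deterministic stand-in for a real model forward pass, and all names and parameters here are assumptions.

```python
import numpy as np

def hidden_states(model_seed, tokens):
    """Stand-in for a transformer forward pass: deterministic per model.
    (A real verifier would run the actual reference model.)"""
    rng = np.random.default_rng(model_seed)
    proj = rng.standard_normal((int(tokens.max()) + 1, 16))
    return proj[tokens]  # one 16-dim "hidden state" per token

def spot_check(claimed_seed, tokens, miner_states, n_samples=4, tol=1e-6):
    """Validator recomputes hidden states at a few random positions and
    compares them to what the miner reported -- the sampling idea
    described for Grail, not its actual verification algorithm."""
    rng = np.random.default_rng(0)
    positions = rng.choice(len(tokens), size=n_samples, replace=False)
    reference = hidden_states(claimed_seed, tokens)
    return bool(np.allclose(miner_states[positions], reference[positions], atol=tol))

tokens = np.arange(32)
honest = hidden_states(model_seed=42, tokens=tokens)
cheater = hidden_states(model_seed=7, tokens=tokens)  # a different, "cheaper" model
print(spot_check(42, tokens, honest))   # True
print(spot_check(42, tokens, cheater))  # False
```

The key design point is that the validator only checks a small random sample of positions rather than recomputing the full inference, which is why this style of check can be faster and cheaper than full re-execution.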

The Unified Vision: Covenant.ai

  • Samuel concludes by revealing that the three subnets will operate under a unified architecture called Covenant.ai, representing a holistic approach to capturing the entire "intelligence continuum."
  • Escaping Local Minima: He argues that the ecosystem must escape the "PvP" (player-versus-player) mindset of simply gaming emissions. The goal is to invest in deep research to build unique primitives that leapfrog centralized competitors.
  • Integrated Pipeline: The strategy is to create a flywheel:
    1. Templar produces state-of-the-art base models.
    2. Basilica provides the decentralized compute for all processes.
    3. Grail uses reinforcement learning to imbue the base models with advanced intelligence.
  • Actionable Insight: This integrated structure is designed to create a powerful, self-reinforcing ecosystem. Researchers and investors should monitor the synergies between these three subnets, as their combined success could produce a product far greater than the sum of its parts.

Conclusion

This episode outlines a clear, ambitious strategy to build a full-stack, decentralized AI pipeline on Bittensor. The team's focus on fundamental research and integrated systems, rather than isolated products, presents a compelling long-term vision for competing with centralized AI giants.
