This episode reveals how Covenant is building a vertically integrated AI stack to democratize frontier model creation, turning the internet into a decentralized data center and challenging the dominance of centralized tech giants.
Covenant’s Vision: A Vertically Integrated AI Stack
- Templar (Subnet 3): Focuses on pre-training, the foundational and most resource-intensive stage where a model ingests vast amounts of data to build its core intelligence. Templar’s goal is to make this process permissionless and decentralized.
- Grail (Subnet 81): Handles post-training and evaluations. This stage refines a pre-trained model's intelligence by training it on specialized datasets to enhance its skills for specific tasks.
- Basilica (Subnet 39): Provides the underlying decentralized compute infrastructure required for both pre-training and post-training, ensuring the network has sufficient processing power.
Sam emphasizes that after a year of development, the focus is now on creating synergy between these three subnets to build a cohesive, powerful platform that is greater than the sum of its parts.
Templar: Decentralizing the AI "Colossus"
- The conversation dives deep into Templar's core innovation: decentralized pre-training. Traditionally, creating a frontier model like Grok requires a massive, centralized data center with hundreds of thousands of GPUs, incurring enormous capital and operational costs. Templar disrupts this by treating the entire internet as a single, distributed data center.
- Mark Jeffrey frames this concept as a "decentralized Colossus," similar to how Napster decentralized file sharing. The collective compute power of the internet will always exceed that of any single data center.
- A key challenge for decentralized training has been speed: communication latency between machines that are not co-located was long thought to be a fatal flaw.
- Sam asserts that Templar has proven this thesis wrong. He states, "decentralized training can be as fast as centralized training. We can achieve some comparable and in some cases better results and we can do it at a fraction of the cost."
- By making pre-training accessible, Templar aims to unlock a new wave of innovation, as all foundational knowledge in an AI model is established during this phase.
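A common technique for taming the communication costs that make non-colocated training hard is gradient sparsification, where each node transmits only its largest-magnitude gradient entries instead of the full dense vector. The sketch below is purely illustrative of that general idea; `topk_sparsify` and the 1% budget are assumptions for this example, not Templar's published method.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector.

    Illustrative top-k compression: the node would send the k surviving
    values plus their indices, rather than the whole dense gradient.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of k largest |g|
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

# A 1M-parameter toy gradient; transmit only 1% of entries per step.
grad = np.random.randn(1_000_000).astype(np.float32)
k = 10_000
sparse_grad = topk_sparsify(grad, k)
transmitted_fraction = k / grad.size  # 0.01
```

Sending ten thousand value/index pairs instead of a million dense floats cuts the per-step payload by roughly two orders of magnitude, which is the kind of saving that makes training over commodity internet links plausible at all.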
From Research to Product: Templar's Commercial Strategy
- Current Performance: Templar is currently training a 72B parameter model, nearly twice the size of any other model trained in a decentralized setting. Its performance is on par with centralized models like K2-360 and approaching that of Llama 2 70B.
- State-of-the-Art Gap: When asked how far Templar is from models like ChatGPT or Grok, Sam estimates they are at "about 60%" of the way there, acknowledging that the final stretch from 90% to 100% is exponentially more difficult.
- Go-to-Market: The immediate plan is to offer pre-training and fine-tuning as a service via APIs. Covenant is already in discussions with academics, corporations, and governments who are interested in creating their own domain-specific foundational models.
- Market Creation: This service creates an entirely new market. Previously, pre-training a model from scratch was prohibitively expensive (estimated at $500k to $5M per run), limiting it to a handful of tech giants. Templar makes it accessible to a much broader audience.
Navigating `TAO` Flow and New Incentive Mechanisms
- The introduction of `TAO` flow—a mechanism where subnet emissions are tied to the demand for a subnet's token—forces a strategic shift in incentive design. Sam explains how Covenant is re-evaluating its mechanisms to be more efficient and sustainable.
- Templar's New Model: Instead of paying many miners to run the same software, Templar will shift to a "winner-takes-most" model. It will expose core algorithmic problems and reward miners who can significantly optimize the training process (e.g., for speed or loss reduction), creating a highly competitive environment that drives innovation.
- The Problem with Existing Compute Subnets: Sam offers a sharp critique of the prevailing economic model for compute subnets. He argues they operate with upside-down economics, overpaying miners for compute availability and then selling that compute at a discount. "You're burning the candle on both ends," he states, a model he believes is unsustainable in the `TAO` flow era.
- Basilica's Alternative: Basilica introduces a new model where miners must provide compute that is cheaper than a centralized baseline. If they can't, the incentives are burned. If they succeed, the revenue from renting the compute is shared, with the rest used for token buybacks, ensuring a positive economic loop.
The Investor's Dilemma: Valuing a New Frontier
- Mark Jeffrey presses Sam on the long-term value proposition for token holders, drawing an analogy to investing in AOL before the internet was mainstream. The challenge lies in valuing a project that is creating a market that doesn't yet exist.
- Future Vision: In two years, Sam envisions Templar as the world's preeminent solution for creating custom frontier AI models, effectively giving every organization a "personal Colossus."
- Value Accrual: Sam commits that 100% of fees generated from these training services will be used to buy back the subnet's token. The team is actively pursuing enterprise sales with potential customers in London, LA, and Abu Dhabi to sign initial letters of intent.
- The Unification Problem: Sam acknowledges that having three separate tokens for Templar, Grail, and Basilica creates "divided attention" for investors. He reveals they are working on a solution to harmonize the tokens and reduce the cognitive burden, though details are not yet public.
Conclusion: From Technical Proof to Market Dominance
This conversation highlights Covenant's strategic pivot from a research-focused lab to a product-driven organization creating a new market for decentralized AI. For investors and researchers, the key is to monitor Templar's transition into a commercial service and the upcoming token unification, which will clarify how value is captured across its integrated stack.