Ventura Labs
June 4, 2025

Sam Dare: Templar Bittensor Subnet 3, Decentralized Pretraining Models, Open-Source AI | Ep. 46

Sam Dare, the visionary behind Templar on Bittensor Subnet 3, joins Ventura Labs to unpack his mission: forging an open, anti-fragile AI ecosystem through decentralized, permissionless training of foundational models. This episode explores Templar's groundbreaking approach, the vital role of miners, and the fight for digital sovereignty against AI behemoths.

The Crusade for Open AI

  • "Decentralized training is humanity's last stand... Creating a base model is literally the cosmic energy of the universe. It's harnessing fire."
  • "We can't assume that the hyperscalers—Google, Meta, Alibaba—[won't] burn our books. They will rewrite history if we let them do that."
  • Sam Dare frames decentralized AI training as a critical battle for humanity's freedom and sovereignty, pushing back against the centralized control of tech giants.
  • He views the ability to create foundational AI models as a fundamental power, akin to "harnessing fire," warning that monopolization by entities like Google or Meta could lead to information manipulation and a rewritten history.
  • Templar’s core mission transcends mere technology; it’s about building an open, permissionless, and anti-fragile AI ecosystem to secure digital self-determination.

Templar's Blueprint on Bittensor

  • "Templar's 1.2 billion parameter model... will claim its place in history as the first time in the world [permissionless decentralized training] has ever been done."
  • "Our design philosophy is pure Bittensor: create the incentive mechanism, the validation mechanism, harden it... and outsource innovation. Through that innovation, emergence happens."
  • Templar’s Subnet 3 on Bittensor pioneered permissionless decentralized training with its initial 1.2 billion parameter model, a size strategically chosen for rapid iteration, learning, and establishing anti-fragility.
  • The approach hinges on robust incentive and validation mechanisms (such as ELO scoring and the "Gauntlet" system) to align miners effectively, fostering emergent innovation across the distributed network.
  • They utilize techniques such as DeMo (Decoupled Momentum) for efficient communication of gradients, prioritizing open participation and resilience.

The Unfolding AI Economy

  • "My thesis is that eventually, everything will collapse into training, and those that are able to train have the most valuable resources."
  • "People will buy our tokens to train on it because training a large model... on Templar will be the best place you can do it."
  • Dare posits that pre-training foundational models will become the central, most valuable activity in the AI landscape, gradually absorbing other auxiliary AI services.
  • Templar’s monetization strategy centers on entities purchasing its tokens to access decentralized compute. This enables them to train specialized foundational models, meeting a rising demand that goes beyond generic fine-tuning.
  • This positions Templar to serve businesses and institutions seeking bespoke AI capabilities, as the need for specialized, controllable intelligence intensifies.

The Soul of the Subnet: Community and Conviction

  • "Honest, transparent conversations... speaking from the heart is the most powerful form of communication."
  • "The purpose of power is to be able to give it away. Templar is successful when I can transfer ownership... [to ensure] we've created something of enduring value."
  • Building Templar involved a significant cultural shift, moving from an initially adversarial relationship with miners to a deeply collaborative one, achieved through radical transparency and genuine community engagement.
  • Dare redefines the subnet owner's role as akin to a "community manager," emphasizing that true, enduring value is co-created when miners feel a sense of ownership and shared purpose.
  • Ultimate success for Templar is envisioned not through token price surges, but by creating a self-sustaining, valuable ecosystem that can outlive its founders, ideally through decentralized governance.

Key Takeaways:

  • Decentralized AI training is more than a technological frontier; it's a burgeoning movement for digital freedom and distributed innovation. Bittensor's architecture, with its emphasis on incentive alignment, allows for powerful emergent properties as miners become co-creators. The future value in AI is increasingly tied to the capacity for decentralized pre-training.
  • Decentralized Pre-training is AI's Liberty Bell: Control over foundational models is control over future narratives; open, permissionless networks are the defense.
  • Incentives Fuel Collective Genius: Bittensor's core strength lies in aligning distributed miners through sophisticated economic games, turning individual efforts into collective super-intelligence.
  • Training is the New AI Moat: As AI capabilities consolidate, the sovereign ability to train bespoke, foundational models will become the ultimate strategic asset for individuals and organizations.

Podcast Link: https://www.youtube.com/watch?v=EkOJIluzOsI

This episode reveals how Templar is pioneering truly decentralized AI model training on Bittensor, positioning it as a critical fight for digital freedom against the dominance of hyperscalers.

Episode Introduction

This episode dives deep into Templar's mission to build a decentralized, permissionless AI training ecosystem on Bittensor, exploring the technical challenges, the philosophical drive for freedom, and the strategic implications for the future of open-source AI. Sam, the visionary behind Templar, shares his journey and insights into creating anti-fragile AI, challenging the dominance of centralized entities like Google.

The Viral SoundCloud and Templar's Ethos

  • The discussion kicks off with a surprising viral SoundCloud track, originating from a candid voice note from Jake (a mentor to Sam) expressing frustration during a fundraising discussion. Sam explains this moment became a watershed, highlighting the imposter syndrome and stress common among builders in the space.
  • Sam decided to release the voice note, and later an EDM remix, as a symbol for the entire Bittensor community. He emphasizes, "I don't think it belongs to Templar, it belongs to the whole of Bittensor," reflecting a shared struggle and the need for perseverance.
  • Speaker Analysis (Sam): Sam's candidness about his own insecurities and the raw emotion behind the SoundCloud track establishes an authentic and relatable tone, emphasizing the human element in high-tech development.
  • Actionable Insight: The viral moment underscores the power of authentic community engagement and shared vulnerability in building strong, resilient project ecosystems, a factor investors should note in project evaluations.

Templar's Vision: Decentralized AI as Humanity's Last Stand

  • Sam, coming from a blockchain background rather than traditional AI, frames his vision for Templar as a fight for "sovereignty, agency, and freedom." He views decentralized AI training as a critical iteration in this ongoing theme, potentially "humanity's last stand" against centralized control.
  • He argues that creating base models—foundational AI models from which other AIs are built—is akin to "harnessing fire" and should not be monopolized by hyperscalers (large cloud computing providers like Google, Meta, Alibaba). Sam warns, "They will burn our books. They will rewrite history if we let them do that."
  • Bittensor is a decentralized network that incentivizes the creation and operation of AI models. Templar operates as a subnet within this ecosystem.
  • Actionable Insight: Investors and researchers should consider the long-term geopolitical and societal implications of AI development. Projects like Templar, focused on open and permissionless base model creation, offer a counter-narrative to centralized AI dominance, representing a strategic hedge.

The Claim: Templar as the Only True Decentralized Training Run

  • Sam asserts that Templar prioritizes "permissionlessness, artifactness, and no centralized controls" above all else. This foundational principle, he believes, distinguishes Templar.
  • He explains that his non-AI background helped him focus on these core tenets, which are "something that has to be built into the protocol," rather than features that can be hired for later.
  • The initial choice of a 1.2 billion parameter model, though smaller than some competitors, was a strategic decision for rapid iteration and debugging. Sam states, "in my eyes and the eyes of my community, it will claim this place in history as the first time in the world it has ever been done."
  • Actionable Insight: When evaluating decentralized AI projects, scrutinize the foundational architecture for genuine permissionlessness and resistance to censorship or control. The ability to iterate quickly on smaller models can be a sign of a robust development process.

The Mechanics of Decentralized Training on Templar (Subnet 3)

  • Sam describes subnet owners as "game masters" or "dungeon masters" who design the incentive game for miners and validators.
  • Miners on Templar pick a page from a massive dataset, train on it, and broadcast their updated model (gradients). Gradients are the adjustments made to an AI model's parameters during training to reduce errors.
  • Validators score miners based on criteria like preventing cheating (e.g., weight copying) and improving the model's loss (a measure of error) more than their own baseline.
  • Initially, Templar used a simple loss improvement score. However, as training progressed and loss improvements naturally diminished, miners weren't getting scored adequately. This led to the development of an ELO scoring system (a method for calculating the relative skill levels of players in competitor-versus-competitor games) by team member Joel, pitting miners against each other.
  • Actionable Insight: The evolution of Templar's validation mechanism highlights the complexity of incentive design in decentralized systems. Investors should look for teams that can adapt and refine these mechanisms in response to real-world network behavior.
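
The pairwise scoring idea can be sketched in a few lines. This is an illustrative toy, not Templar's actual ELO implementation; the function names, starting rating, and K-factor are assumptions:

```python
# Hypothetical sketch of an ELO-style miner scoring pass: each miner's
# loss improvement is compared head-to-head against every other miner's.
# Names and parameters are illustrative, not Templar's actual code.

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Standard Elo rating update for one head-to-head comparison."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_won else 0.0
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

def score_miners(loss_deltas: dict) -> dict:
    """Pit every pair of miners against each other: the miner whose
    gradient improved the validator's loss more 'wins' that matchup."""
    ratings = {m: 1000.0 for m in loss_deltas}
    miners = list(loss_deltas)
    for i in range(len(miners)):
        for j in range(i + 1, len(miners)):
            a, b = miners[i], miners[j]
            a_won = loss_deltas[a] > loss_deltas[b]
            ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], a_won)
    return ratings
```

Because scores are relative, rankings stay meaningful even as absolute loss improvements shrink late in training, which is exactly the failure mode the simple loss-improvement score ran into.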

DeMo: Efficient Communication in Decentralized Training

  • Templar utilizes DeMo (Decoupled Momentum), a technique developed by Nous Research, for communication. Sam clarifies, "DeMo is just communications... DeMo's not us."
  • DeMo is a top-k sparsification compression method. Instead of broadcasting the entire gradient (all model updates), it analyzes the gradient using a transform such as the DCT (Discrete Cosine Transform) to identify the "most important layers" or "fast-moving components of momentum."
  • Only these compressed bits representing the most significant changes are sent, reducing communication overhead, which is crucial in decentralized networks. Transformers, a common AI model architecture, often have redundant or "dead" layers, making such compression effective.
  • Actionable Insight: Communication efficiency is a major bottleneck in decentralized AI training. Innovations like Demo are vital for scalability. Researchers should explore and contribute to such compression and communication strategies.
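
The core idea of top-k transform compression can be illustrated with a small sketch. This is a toy in the spirit of DeMo, not the actual Nous Research implementation; chunking, momentum decoupling, and error feedback are all omitted:

```python
# Toy top-k DCT compression of a gradient vector: transform, keep only
# the k largest-magnitude coefficients, transmit (index, value) pairs.
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= np.sqrt(1.0 / n)
    basis[1:] *= np.sqrt(2.0 / n)
    return basis

def compress(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude DCT coefficients of the gradient."""
    coeffs = dct_matrix(len(grad)) @ grad
    idx = np.argsort(np.abs(coeffs))[-k:]
    return idx, coeffs[idx]                # send k pairs instead of len(grad) values

def decompress(idx: np.ndarray, vals: np.ndarray, n: int) -> np.ndarray:
    """Reconstruct an approximate gradient from the sparse coefficients."""
    coeffs = np.zeros(n)
    coeffs[idx] = vals
    return dct_matrix(n).T @ coeffs        # orthonormal: inverse == transpose
```

Sending 128 coefficients of a 1024-element gradient cuts the payload roughly 8x, at the cost of an approximation error that the redundancy in transformer layers helps absorb.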

Synchronous Training and Iteration Speed

  • Templar currently uses a synchronous training approach, where miners train for a step and then communicate that step's results before proceeding.
  • Sam acknowledges that asynchronous training (training for the next step while communicating the previous one) can be more efficient by overlapping operations. Templar is moving towards this but is currently focused on more fundamental problems.
  • The choice of a 1.2 billion parameter model was key for rapid iteration. Sam states, "I could break it five times a day and launch it again five times a day." This allowed for quick identification and fixing of issues with limited resources.
  • Actionable Insight: The ability to iterate quickly is a significant advantage in the fast-moving AI space. Projects that can rapidly test, break, and fix their systems are more likely to overcome early challenges.
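
The difference between the two loops can be sketched as follows, with stand-in `train_step` and `communicate` functions (illustrative only, not Templar's scheduler):

```python
# Minimal sketch contrasting a synchronous training loop with an
# overlapped (asynchronous) one that communicates the previous step's
# gradient while computing the next step.
from concurrent.futures import ThreadPoolExecutor

def run_synchronous(steps, train_step, communicate):
    """Each step fully trains, then fully communicates: no overlap."""
    for step in range(steps):
        grad = train_step(step)
        communicate(grad)          # compute sits idle during this call

def run_overlapped(steps, train_step, communicate):
    """Broadcast step N's gradient in the background while computing N+1."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = None
        for step in range(steps):
            grad = train_step(step)
            if pending is not None:
                pending.result()   # wait for the previous broadcast to finish
            pending = pool.submit(communicate, grad)
        if pending is not None:
            pending.result()
```

When communication and compute take similar time, overlapping them can nearly double throughput, which is why Sam flags asynchronous training as the eventual destination.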

The Three Pillars of Decentralized Training: Communication, Incentives, Verification

  • Sam references Seth Bloomberg from Unsupervised Capital, who highlights that while many focus on communication efficiency, incentives and verification are equally hard problems in decentralized training.
  • Templar has focused heavily on the incentive layer. Verification, particularly using zk (Zero-Knowledge) proofs, presents challenges. zkML (Zero-Knowledge Machine Learning) enables private verification of AI models. Sam worries that zk approaches might constrain miner expressiveness and emergence, a core strength of Bittensor.
  • He mentions that Gensyn (likely the "Jensen" referenced in the episode) has done interesting work on verification with "Verde."
  • Actionable Insight: Investors and researchers should assess decentralized AI projects across all three pillars: communication, incentives, and verification. A weakness in one can undermine the entire system. The trade-offs between verifiability and network expressiveness are critical research areas.

Scaling Challenges: Communications and Model Parallelism

  • The primary scaling wall for Templar is communications.
  • Templar currently uses data parallel training, where each participating node (accelerator) has a full copy of the model. This approach doesn't scale well beyond a certain point (e.g., 70 nodes, according to Sam). FSDP (Fully Sharded Data Parallel) is a technique to make data parallelism more scalable by sharding model parameters, gradients, and optimizer states across data parallel workers.
  • To reach very large models, Templar will likely need to implement tensor parallelism, a model parallelism technique where individual layers or tensors of a model are split across multiple devices. Sam notes this is "not sexy to design."
  • Actionable Insight: The transition from data parallelism to more complex model parallelism techniques like tensor parallelism is a significant technical hurdle for decentralized training systems. Progress in this area is key to training larger, more competitive models.
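
Tensor parallelism splits individual weight matrices across devices rather than replicating the whole model. A toy column-parallel linear layer shows the idea (names are illustrative; a real implementation would also need collective communication such as an all-gather to assemble the output):

```python
# Toy illustration of tensor (model) parallelism: a linear layer's
# weight matrix is split column-wise across "devices", each computing a
# shard of the output, which are then concatenated.
import numpy as np

def column_parallel_linear(x: np.ndarray, weight: np.ndarray, n_shards: int):
    """Compute x @ weight by giving each shard a slice of the columns,
    then concatenating the partial outputs (as an all-gather would)."""
    shards = np.array_split(weight, n_shards, axis=1)  # one slice per device
    partial_outputs = [x @ w_shard for w_shard in shards]
    return np.concatenate(partial_outputs, axis=-1)
```

Unlike data parallelism, no single device ever holds the full weight matrix, which is what makes models larger than one accelerator's memory reachable, at the price of much chattier communication.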

War Stories: The Miner-Subnet Owner Dynamic

  • Sam shares candid "war stories" of his early interactions with miners, describing the first two months as intense "PvP" (player versus player). He recounts being exploited by miners on Christmas Day and New Year's Eve due to vulnerabilities in the system.
  • A turning point came when Const (Bittensor co-founder Jacob Steeves) publicly highlighted Sam's struggles, leading Sam to ask the community for help. This shift from an adversarial to a collaborative relationship was crucial. "The miners became an extension... There are at least 15 devs that helped with it."
  • Sam emphasizes that subnet owners are more like "community managers." The key is to create something of value that miners want to be part of and hold, fostering a sense of shared ownership.
  • Speaker Analysis (Sam): Sam's raw and honest recounting of his struggles with miners and his eventual realization of the importance of community collaboration provides a powerful lesson in decentralized governance and leadership.
  • Actionable Insight: The relationship between network operators (miners/validators) and protocol developers is critical. Projects that foster genuine community and shared ownership are more likely to build resilient and innovative ecosystems.

Tokenized Model Ownership and Monetization

  • Sam strongly believes in tokenized model ownership. His core thesis is that "eventually everything will collapse into training and those that are able to train have the most valuable resources."
  • He sees a growing demand for specialized foundational models, beyond what simple fine-tuning can offer. Templar aims to be the platform where labs, institutions, and businesses come to train these large, specialized models, using Templar's token to access compute.
  • While token-gated models are a possibility, Sam notes the complexities, comparing it to copyright enforcement.
  • Actionable Insight: The primary value proposition for Templar's token appears to be access to decentralized training compute for large, specialized foundational models. Investors should track the demand for such services and Templar's ability to deliver them competitively.

The Inevitable Absorption by Foundational Models

  • Sam predicts that many auxiliary AI services and tools will be absorbed into foundational models. He cites examples like Cursor's potential issues with API access from model providers like Anthropic and OpenAI.
  • "The platform in the age of AI is give us all your data, use our APIs, and get the [ __ ] out of here." This highlights the power of those who control the base models.
  • He criticizes the "open-ish source" nature of some models, like Meta's, which require manual approval for access to artifacts like tokenizers.
  • Actionable Insight: Researchers and investors should be aware of the trend towards consolidation of AI capabilities within foundational models. This could impact the viability of standalone AI tools and applications, making access to and control over training infrastructure even more critical.

Open-Source Models and the Hardware Bottleneck

  • Sam mentions DeepSeek as a "pretty good" open-source model. He acknowledges that most still use closed-source models for cutting-edge performance.
  • DeepSeek employs a Mixture of Experts (MoE) architecture. In an MoE model, instead of a single large feed-forward network (a type of neural network layer), multiple smaller "expert" networks are used. During inference, a routing mechanism directs input tokens to only a subset of these experts, making them computationally efficient for their size. DeepSeek also pioneered techniques like multi-head latent attention (MLA) and auxiliary-loss-free load balancing for their MoE routers.
  • A critical, often overlooked, bottleneck is hardware. Sam emphasizes, "there's also hardware... we need to push on hardware too." He believes competitive open-source chips are vital for accelerating open-source AI, as giants like Google have their own custom TPUs (Tensor Processing Units).
  • He had previously considered a subnet for open-source kernel design. Kernels are small, efficient programs that run on GPUs to perform specific computations.
  • Actionable Insight: The development of open-source hardware and optimized kernels is a crucial frontier for leveling the playing field in AI. Investors and researchers should monitor and support initiatives in this area.
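
The MoE routing described above can be illustrated with a toy top-2 router (highly simplified; real routers, including DeepSeek's, add load balancing, capacity limits, and much more):

```python
# Toy top-2 Mixture-of-Experts forward pass: a router scores experts per
# token, and only the top-k experts actually run for each token.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(tokens, router_w, experts, top_k=2):
    """Route each token to its top_k experts and mix their outputs,
    weighted by the renormalized router probabilities."""
    logits = tokens @ router_w                      # (n_tokens, n_experts)
    probs = softmax(logits)
    out = np.zeros_like(tokens)
    for t in range(tokens.shape[0]):
        top = np.argsort(probs[t])[-top_k:]         # chosen expert indices
        weights = probs[t, top] / probs[t, top].sum()
        for w, e in zip(weights, top):
            out[t] += w * experts[e](tokens[t])     # only top_k experts run
    return out
```

The efficiency win is that per-token compute scales with `top_k`, not with the total number of experts, so parameter count can grow far faster than inference cost.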

Storage Choices: R2 Buckets and Pragmatism

  • Templar uses Cloudflare R2 buckets for storage. Sam explains this is a pragmatic choice: "You decide what you want to fight." R2 offers cheap, high-bandwidth, high-performance storage with reliable timestamps.
  • These timestamps are integrated into Templar's incentive mechanism to prevent cheating (like copying gradients) and enforce performance (e.g., requiring gradients to be submitted within a specific time window).
  • Sam briefly mentions his past attempt at a storage subnet, highlighting the immense difficulty of true decentralized storage.
  • Actionable Insight: Decentralized projects often need to make pragmatic choices about which parts of the stack to decentralize immediately. Focusing decentralization efforts on core value propositions while leveraging existing infrastructure for other components can be a sensible strategy.
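
The timestamp enforcement can be sketched as a simple window predicate (the 90-second window and function names here are assumptions for illustration, not Templar's actual parameters):

```python
# Hedged sketch of using storage-object timestamps to enforce a gradient
# submission window, in the spirit of Templar's R2-based mechanism.
from datetime import datetime, timedelta, timezone

WINDOW = timedelta(seconds=90)  # assumed per-step deadline, illustrative

def within_window(step_start: datetime, uploaded_at: datetime) -> bool:
    """Accept a gradient only if the object's storage timestamp falls
    inside the current step's window: late or pre-dated uploads score zero."""
    return step_start <= uploaded_at <= step_start + WINDOW
```

Because the timestamp comes from the storage provider rather than the miner, a miner cannot back-date a copied gradient to make it look original, which is what ties the storage choice into the incentive mechanism.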

Preventing Malicious Gradients: A Future Challenge

  • Templar is not yet fully addressing the problem of sophisticated malicious gradient injection (the "verification problem" for strong adversarial models, e.g., nation-state attacks).
  • Their current adversarial model assumes a "weak adversary" primarily trying to manipulate scores for economic gain.
  • Sam views robust verification against strong adversaries as "tomorrow's problem," acknowledging work by others such as Gensyn's Verde and potential zk solutions.
  • Actionable Insight: Security against sophisticated attacks is a long-term challenge for decentralized AI. Researchers should focus on developing robust and scalable verification methods for training integrity.

The Real Competitor: Google

  • Sam identifies Google, not OpenAI or Anthropic, as the ultimate competitor. "Google is a Leviathan. Google has all the data. Google has all the hardware."
  • He urges the Bittensor community to stop "PvP" and collaborate to compete against centralized giants. "Every moment we spend, you know, fighting Bittensor... we're closer to dying."
  • Actionable Insight: The competitive landscape in AI is dominated by a few large players with significant resource advantages. Decentralized alternatives require substantial collaboration and focus to offer a viable alternative.

Multimodal Models and Future Directions

  • Templar is currently focused on language models and not yet on multimodal models (models that can process multiple types of data like text, images, video).
  • However, Sam sees this as a future possibility once the core training infrastructure is perfected. "Once you have the platforms, the infrastructure, the talent to do this, you can do almost anything."
  • He mentions Gemini's large context window (though noting the difference between advertised and effective context).
  • Actionable Insight: While specialization is key initially, the underlying infrastructure for decentralized training could eventually support a wide range of AI model types, including multimodal.

Team Building and Hiring: "Vibes" and Trust

  • Sam's hiring process is unconventional, described as "Vibes." His first hire was an old friend, emphasizing trust due to the risk of exploitation in the Bittensor environment.
  • The team is small (currently four full-time, aiming for six) to keep Sam "in the sauce" and avoid excessive management overhead.
  • The success of Templar One has attracted inbound talent that was previously inaccessible.
  • Actionable Insight: In cutting-edge, high-stakes environments, trust and cultural fit ("vibes") can be as important as technical skills in early team building.

Liquid Alpha and Yuma Consensus: A Cautious Approach

  • Sam created Liquid Alpha (a Bittensor consensus feature that lets subnets tune validator bonding dynamics) early in his Bittensor journey.
  • He expresses caution about implementing new, complex consensus mechanisms like Yuma Consensus (referred to as Yuma 3) if it introduces significant attack surfaces or development overhead. "If it's something that's going to open Templar up to 6 months worth of attacks... it's a very hard decision for us."
  • He balances accountability to TAO (Bittensor's native token) holders and his own token holders, prioritizing long-term value creation for Templar.
  • Actionable Insight: Subnet owners face complex decisions when adopting network-wide upgrades, balancing potential benefits with security risks and development burdens. This highlights the governance challenges in decentralized ecosystems.

Templar's Roadmap: Bigger Models, Verification, Productization

  • The immediate roadmap focuses on:
    • Training bigger models.
    • Improving verification mechanisms.
  • Eventually, Templar will need to focus more on "product work," but the current phase is heavily R&D-focused for at least the next six months.
  • Actionable Insight: Templar's roadmap reflects a phased approach: first, master the core technology of decentralized training at scale, then build products around it.

Ultimate Goal: Enduring Value and Decentralized Stewardship

  • Sam's ultimate goal for Templar is to create something of "enduring value" that can outlast him. "Templar is successful when I can transfer ownership of that owner key to someone or some entity."
  • He envisions Templar becoming an "ideology" or a "religion," sustained by a community that believes in its mission, rather than just a company or token price.
  • He expresses a desire to eventually step back and engage with the BitTensor ecosystem more broadly.
  • Speaker Analysis (Sam): Sam's long-term vision is deeply philosophical, emphasizing the creation of lasting, decentralized value over personal enrichment or control. This conviction is a powerful driving force.
  • Actionable Insight: Projects with a strong, clearly articulated mission beyond short-term financial gains may foster greater long-term community loyalty and resilience.

Advice for Developers: Grit and Community

  • When asked for advice for developers building on BitTensor, Sam's primary answer is "Grit. Absolute grit. 100% grit."
  • He encourages aspiring builders to pick up a subnet's code, read it, keep going, ask questions, and make friends within the community.
  • "It's not about where you're from. It's more about how much you're ready to bleed. That's what Bittensor is."
  • Actionable Insight: Success in pioneering ecosystems like Bittensor requires immense perseverance and a willingness to engage deeply with the technology and the community.

Reflective and Strategic Conclusion

This episode underscores that decentralized AI training is not just a technical challenge but a fight for digital sovereignty. Investors and researchers must track projects like Templar that prioritize permissionless innovation and robust incentive design, as these will define the future of open AI.
