Hash Rate pod - Bitcoin, AI, DePIN, DeFi
May 12, 2025

Hash Rate - Ep 109 - Templar $TAO sn3 - Decentralized AI Training

This episode of Hash Rate features Sam Dar, founder of Templar (Subnet 3) on BitTensor, discussing the intricacies of decentralized AI training, the impact of DTO (Dynamic TAO Offering), and the vision for an open, democratized AI future.

The DTO Gauntlet: Aligning Incentives for Long-Term Value

  • "I think it's succeeding massively in aligning incentives... In DTO, you need to create long-term sustainable value before you can even dream of taking out a sliver."
  • "So across the board every subnet has to step their game up or die... So DTO really incentivizes that."
  • DTO shifts the paradigm from daily TAO payouts (as in the old root network) to compelling subnets to build enduring value before extracting significant rewards.
  • This model fosters a competitive "innovate or perish" environment, pushing all subnet teams to elevate their product quality and strategic vision.
  • It enforces fiscal discipline. Templar, for example, operates on a community-approved budget of 400 TAO per month, highlighting a focus on lean operations and prioritizing R&D.

Templar: Forging the "Linux of AI Training"

  • "We've created the first decentralized permissionless training protocol in the world... instead of having one person come up with 100,000 H100s, can we have each miner source 390 H100s?"
  • "The validators running the H100s, the miners have to beat the validator and beat themselves... that little thing creates that incentive flywheel where miners will throw as much compute, as much innovation as possible."
  • Templar’s mission is to decentralize the creation of large language models by enabling a swarm of globally distributed, smaller-scale data centers (running high-end GPUs like H100s) to collaboratively train AI.
  • It strategically excludes low-power hardware (like Raspberry Pis) to maintain network efficiency, as the system's speed would otherwise be bottlenecked by the slowest participant.
  • A core innovation is its validation mechanism: miners are incentivized to constantly improve their training efficiency to outperform both the validators (who also run H100s) and competing miners, thus outsourcing innovation to the network's edges.

Subnet Economics: Emissions, R&D, and the Revenue Question

  • "The first purpose of DTO is not to enrich moonboys. It's to fund development teams."
  • "I don't think it's unrealistic to expect the protocol to fund us for a year to do the R&D because that's the only way we survive."
  • Sam Dar advocates that DTO's primary function is to subsidize development teams through TAO emissions, enabling them to focus on deep R&D, much like early tech companies burned venture capital to achieve scale before focusing on monetization.
  • For Templar, the long-term revenue model involves offering "pre-training as a service," where enterprises or other labs would purchase Templar’s subnet tokens to access their unique, decentralized AI model training capabilities.
  • There's a strong sentiment against premature optimization for SaaS-like revenue if it distracts from the core, difficult R&D required to build foundational, BitTensor-native technologies.

Key Takeaways:

  • The future of AI development might not solely rest in the hands of centralized giants. Decentralized networks like BitTensor, powered by innovative subnets like Templar, are emerging as formidable contenders to democratize AI. The DTO model, while challenging, is a crucial catalyst, forcing a focus on sustainable innovation and community-aligned incentives.
  • R&D Over Premature Revenue: For ambitious projects like decentralized AI training, protocol-funded R&D (via emissions) is vital; chasing early SaaS revenue can be a fatal distraction from building truly groundbreaking tech.
  • Decentralization as Defense: Templar’s strategy to build permissionless, world-class AI models using a distributed network of high-performance compute (H100s) directly challenges the centralized control of AI giants, aiming to be the "Linux for AI."
  • DTO Mandates Fiscal Grit: The DTO framework forces subnet teams into lean operations, demanding transparency with their token-holding communities and a relentless focus on delivering substantial, long-term value.

For further insights and detailed discussions, watch the full podcast: Link

This episode features Sam Dar of Templar Subnet 3, unpacking the journey of building decentralized AI training, arguing DTO's true value lies in funding deep R&D and aligning incentives, rather than chasing early revenue.

Introduction to DTO and Early Impressions

  • Sam Dar, representing Templar Subnet 3 (SN3), discusses the initial 90 days of DTO (Dynamic TAO Offering), a mechanism in the BitTensor ecosystem that changes how subnets receive TAO (the network's native cryptocurrency) emissions, aiming to align incentives for long-term value creation.
  • Sam believes DTO is "succeeding massively in aligning incentives." He explains that unlike the previous Root Network (the main BitTensor blockchain) model where subnet owners could receive daily TAO payouts, DTO forces a focus on long-term sustainable value.
    • "In DTO, you need to create long-term sustainable value before you can even dream of taking out a sliver," Sam states, highlighting the shift in mindset required.
  • This pressure compels all subnets (specialized networks within BitTensor) to elevate their offerings or risk failure, fostering a more competitive and innovative ecosystem.
  • However, Sam notes a concern with market dynamics under DTO, where capital might preferentially flow to new subnets with low liquidity for quick price gains, rather than consistently supporting established, value-generating subnets.

Templar's Financial Realities and DTO's Influence

  • Despite Templar's strong performance on subnet charts, Sam candidly shares his personal financial situation, stating, "I have no money."
  • He recounts Templar's history of receiving minimal emissions until just two weeks before DTO, with those late gains being entirely distributed to miners through bounties. This period taught him to operate leanly, even building Templar V1 single-handedly.
  • DTO, in Sam's view, enforces frugality and wiser capital allocation. Templar's community recently approved a 400 TAO monthly spend from the pool, which Sam considers sufficient for the next three months, emphasizing value delivery over extraction.
    • Mark Jeffrey, the host, notes that 400 TAO (around $175,000 at current prices) is a legitimate burn rate for a small startup.
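A quick sanity check of the figures above, as a minimal sketch. The $175,000 valuation of 400 TAO is the host's rough number, so the implied TAO price here is an inference from the episode, not a quoted market price.

```python
# Sanity-check the burn-rate figures from the episode.
# 400 TAO/month valued at roughly $175,000 implies a TAO price
# of about $437.50 (an inference from the host's numbers, not a
# quoted market price).
monthly_tao = 400
monthly_usd = 175_000

implied_tao_price = monthly_usd / monthly_tao
annual_burn_usd = monthly_usd * 12

print(implied_tao_price)  # 437.5
print(annual_burn_usd)    # 2100000
```

At roughly $2.1M per year, the annualized burn is indeed in line with a lean, small-team startup, which supports the host's framing.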

The Debate: Venture Capital vs. DTO-Funded Development

  • Sam clarifies he doesn't hate venture capital (VC) but cautions that founders, especially those new to BitTensor, must understand VC dynamics. Misaligned expectations or bad deals can demotivate founders and derail projects.
    • He shares a personal anecdote: "I almost sold Templar. I almost sold a million Templ— 200,000 Templar for $5 when we were at $10. I was this close to doing this."
  • Sam argues DTO's primary purpose is to fund development teams, not to enrich short-term speculators. If DTO fails to support development, it implies either teams are inefficient or DTO itself has shortcomings.
  • Mark Jeffrey concurs, viewing DTO as an incentivization mechanism where TAO emissions (TAO tokens distributed by the network) act as subsidies, similar to how Uber used VC funds to subsidize rides, which is a valid strategy for growth.
  • Sam acknowledges a stigma around subnet owners selling their pooled TAO but insists it's vital for investment and sustained value creation, stating, "The only way to create sustained value is through investments."

Templar Explained: The Mechanics of Decentralized AI Training

  • Mark Jeffrey initially describes Templar as a system for creating LLMs (Large Language Models) like ChatGPT by crunching data using a swarm of distributed computers, including home PCs.
  • Sam refines this: Templar focuses on coordinating data centers with high-performance GPUs like NVIDIA H100s, not home computing. He argues the main challenge in large-scale AI training isn't a technical limitation but a "failure of coordination."
    • Volunteer computing often fails because the network speed defaults to the slowest participant (e.g., a Raspberry Pi). Templar sets a minimum compute threshold.
  • Templar's approach is "true collaborative training":
    • Miners source significant compute (e.g., hundreds of H100s each).
    • The system uses Stochastic Gradient Descent (SGD), an algorithm for optimizing AI models. Miners process data "pages," train on them, and broadcast gradients (mathematical indicators of how to improve the model).
    • Validators, also running H100s, sample these gradients. Miners are scored on how much their model's loss (a measure of inaccuracy; lower is better) improves compared to the validator's.
    • This creates an "incentive flywheel," pushing miners to innovate continuously. Sam emphasizes, "In BitTensor, you build your validation system and you outsource innovation out to the edges."
  • A key challenge for subnet owners is managing rational economic actors (miners) who will initially try to exploit the system for maximum incentive, rather than focusing on the intended task. The owner must refine the system to align miner profit with desired outcomes.
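The scoring loop described above can be illustrated with a toy sketch. This is not Templar's actual code: it uses a small least-squares problem in place of a language-model loss, and all names (`loss`, `gradient`, the miner labels) are illustrative. The point it shows is the core mechanic: each miner broadcasts a gradient computed on its data "page," and the validator scores miners by how much their gradient improves the loss relative to the validator's own gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for a language-model training objective.
X = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=256)

def loss(w):
    """Mean squared error -- lower is better, like LM loss."""
    return float(np.mean((X @ w - y) ** 2))

def gradient(w, idx):
    """Gradient of the loss on a sampled data 'page' (an SGD minibatch)."""
    Xb, yb = X[idx], y[idx]
    return 2 * Xb.T @ (Xb @ w - yb) / len(idx)

w = np.zeros(8)   # shared model state
lr = 0.05
pages = [rng.choice(256, 64, replace=False) for _ in range(3)]

# Each miner trains on its assigned page and broadcasts a gradient.
miner_grads = {f"miner_{i}": gradient(w, p) for i, p in enumerate(pages)}

# The validator (also running comparable hardware) computes its own
# reference gradient, then scores each miner by how much applying the
# miner's gradient reduces the loss relative to the validator's step.
val_grad = gradient(w, rng.choice(256, 64, replace=False))
baseline_gain = loss(w) - loss(w - lr * val_grad)

scores = {}
for name, g in miner_grads.items():
    miner_gain = loss(w) - loss(w - lr * g)
    scores[name] = miner_gain - baseline_gain  # > 0 means "beat the validator"

print(scores)
```

Because a miner's reward depends on beating both the validator's gain and the other miners' gains, any efficiency trick a miner discovers (better data selection, faster optimization) immediately translates into score, which is the "incentive flywheel" Sam describes.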

Templar vs. Big Tech in AI Training

  • Mark mentions Sam Altman (OpenAI CEO) suggesting a future need for decentralized training.
  • Sam Dar frames Templar's mission as democratizing AI creation, contrasting with the closed-source approach of large corporations like Google and Meta (Facebook). He voices concern over centralized control of powerful AI by entities with questionable track records.
    • "Are these the people you want to give Prometheus fire to? No," Sam asserts, underscoring the ideological drive behind Templar.
  • Templar aims to provide the technology and resources for anyone to build powerful models, ensuring alternatives to Big Tech's offerings. He believes we are hitting limits on how much centralized compute and data can achieve, necessitating new approaches for ongoing knowledge integration.
  • The goal is to create a "Linux of training"—an open, collectively controlled standard.

Templar's Roadmap and Progress

  • Sam estimates Templar is about "18 months" away from being able to train a frontier-level AI model, acknowledging the significant work and dependencies involved. The core "kernel" of their system is permissionless, allowing for acceleration.
  • Templar recently completed training a 1.2-billion-parameter model. While relatively small by frontier standards, it demonstrates incremental progress; a report on its performance is forthcoming.

Insights on DeepSeek and Model Architectures

  • Sam views DeepSeek's achievements as impressive, noting they "democratized a lot" by open-sourcing techniques potentially used by frontier labs. However, he points out significant "sunken costs" in DeepSeek's development that are not immediately apparent.
  • DeepSeek utilizes sparse models, specifically Mixture of Experts (MoE). This architecture uses multiple specialized "expert" sub-models, which can be more efficient for inference than traditional dense models where all parameters are active.
  • A major challenge for implementing MoE in a decentralized setting like Templar is the high communication overhead required between the distributed experts.
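The communication problem mentioned above can be made concrete with a minimal top-k gating sketch (an illustration, not DeepSeek's or Templar's architecture; all dimensions and names are made up). In a dense model every token flows through one set of weights on one machine, but in a distributed MoE each token is routed to its top-k experts, and if those experts live on different machines, every routing decision implies a network transfer of that token's activations.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d_model, n_experts, top_k = 16, 4, 2

# Each "expert" is a small feed-forward layer. In a decentralized MoE
# these would sit on different machines, so every token routed to an
# expert implies a network transfer of its activations.
experts = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(n_experts)]
gate_w = rng.normal(size=(d_model, n_experts)) * 0.1

def moe_layer(tokens):
    probs = softmax(tokens @ gate_w)   # router: per-token expert weights
    out = np.zeros_like(tokens)
    transfers = 0
    for t, (tok, p) in enumerate(zip(tokens, probs)):
        chosen = np.argsort(p)[-top_k:]            # top-k gating
        for e in chosen:
            out[t] += p[e] * np.tanh(tok @ experts[e])
            transfers += 1                          # one token->expert hop
    return out, transfers

tokens = rng.normal(size=(8, d_model))
out, transfers = moe_layer(tokens)
print(out.shape, transfers)  # 8 tokens x top_k experts = 16 transfers
```

Sparsity makes inference cheaper (only top_k of n_experts matrices run per token), but the `transfers` count grows with every token at every MoE layer, which is why the architecture is communication-heavy in a geographically distributed setting.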

Templar's Monetization Strategy: Pre-Training as a Service

  • Monetization is a longer-term goal for Templar, with current focus on R&D.
  • The primary future service envisioned is pre-training as a service. This involves offering the capability for others to train large foundation models (base AI models) from scratch.
  • Sam foresees a market where AI labs and companies will buy Templar's SN3 tokens to access their decentralized training platform, especially for creating niche or domain-specific foundation models (e.g., for Netflix, or for Stripe, the Collison brothers' company).
    • "What we envision is that we'll have so much compute and have the best platform for pre-training that foundation labs or frontier labs will buy our tokens to train on our platform."
  • This differs from just fine-tuning (adapting existing models), as building specialized foundation models requires extensive pre-training resources. Templar could also extend its platform for post-training and inference.

Defining Subnet Success Beyond Revenue

  • Sam expresses strong reservations about the BitTensor community's early "fetishizing on revenue." He argues that forcing subnets to become SaaS (Software as a Service) businesses prematurely can lead to suboptimal design choices and an inability to compete with efficient Web2 SaaS offerings.
  • For subnets like Corcel (formerly "Choose," SN1), which offer services, Sam questions how long it will take for their revenue to genuinely offset their TAO emissions, given the current subsidized nature of their offerings.
  • He advocates for subnets to become "truly BitTensor native," leaning into the "full chaos of miners" and harnessing their collective power. This deep integration with BitTensor's incentive mechanisms is, for Sam, a truer measure of a subnet's value and purpose.
    • "If you cannot lean into the full chaos of miners and harness them, then I don't think your subnet should be on BitTensor," he states firmly.

Funding R&D: The Role of Protocol Emissions

  • Sam believes it's realistic for the BitTensor protocol (via TAO emissions) to fund a subnet's R&D for approximately a year. This period is crucial for deep research and development, allowing teams to build genuinely innovative and robust systems.
  • He likens this phase to OpenAI's early days developing Dota-playing AI, before ChatGPT was widely known. The focus should be on delivering incremental value and building mindshare.
  • Templar's strategy is to be frugal and committed, aiming to reach trillion-parameter model training capabilities. Forcing a SaaS product now would be a distraction from this core R&D mission.
    • Mark Jeffrey notes that even successful service-providing subnets like Corcel currently offer significant cost savings due to TAO subsidies, which Sam agrees with.
  • Sam reiterates that while Templar aims for competitive models within approximately six months, the project is highly R&D-intensive.

Sam Dar's Journey to Templar

  • Sam shares his diverse background: a degree in chemical engineering (which he disliked), a stint as a nightclub promoter, and passing CFA exams.
  • His career transitioned through email marketing at Salesforce, then a "journey of self-discovery" into Bitcoin and crypto. He understood the technology's transformative potential despite price volatility.
  • He worked as a technical lead for Shell's blockchain initiatives (focused on Ethereum), then as a blockchain architect at Brave (the browser), which he found frustrating as it wasn't a "blockchain company" at its core.
  • A difficult CTO role at a DeFi protocol was followed by building a startup on the Terra/Luna blockchain, which collapsed dramatically.
  • Disheartened, he applied to the OpenTensor Foundation in March of the previous year, becoming Head of Blockchain and working intensely with Jake.
  • His interest in BitTensor grew upon realizing the earning potential of subnet owners. He experimented with mining and conceptualized a storage subnet, but deemed storage technology "crazy hard" and more challenging than decentralized training due to strong Web2 competitors like Amazon S3.
    • Mark Jeffrey adds context from Vinnie Lingham about Filecoin offering significant cost savings (e.g., 1/10th the cost) over Amazon S3 as a competitive angle.
  • Sam eventually collaborated with "Const" (Jacob Steeves, BitTensor's co-creator) on an early version of Templar, which then underwent a seven-month rewrite to become the current iteration.

Concluding Thoughts

  • The discussion wraps up with Sam humorously acknowledging Michael White (another figure in the crypto space) for "snitching" about his past as a nightclub promoter.

Strategic Conclusion for Crypto AI Investors and Researchers:

This episode underscores that DTO aims to cultivate deep, protocol-level innovation within BitTensor, prioritizing long-term R&D for breakthroughs like decentralized AI training over immediate SaaS revenue. Investors and researchers should assess subnets on their ability to harness BitTensor's unique incentive mechanisms and deliver foundational technological value, rather than solely on premature monetization efforts.
