taostats
October 10, 2025

Novelty Search, October 9, 2025

The Targon team joins the show to unveil their new self-serve compute platform, a major leap built on battle-tested security. This is a deep dive into how they’re turning the hostile, permissionless environment of Bittensor into a strategic advantage to build a product that rivals—and in some ways surpasses—AWS.

The Targon Platform Is Live

  • "We're really excited to be announcing and launching our fully self-served platform for container rentals... You can go to targon.com right now and check this out. It's all self-served; you don't have to interact with us at all."
  • Self-Serve Compute: Users can now instantly rent CPU and GPU servers (currently H200s) directly from the Targon website, with support for Docker and a VM-like experience inside containers.
  • Serverless SDK: A new SDK allows developers to deploy Python functions, run vLLM instances, or host web scrapers, creating a comprehensive cloud solution for iterating on AI models.
  • Enterprise-Grade Features Coming Soon: The roadmap includes RDMA clusters and network volumes for large-scale pre-training and fine-tuning, directly addressing the needs of frontier AI companies.

The Un-Gameable Game: Trusted Execution Environments (TEEs)

  • "When the virtual machine's online, you can call this attestation process which says, 'Hey, sign this with the secret data that is burned into the chip.' That report includes metadata about the process that's running, so we can make sure it's the exact VM we expect."
  • Hardware-Level Verification: TEEs use secret keys baked directly into Intel and NVIDIA chips to create encrypted VMs. These VMs are so secure that even the hardware owner cannot access the data inside.
  • Collaboration with Intel: The team is working so closely with Intel that their feedback is helping steer the direction of hardware-level features in future chips, bridging the gap between decentralized network needs and silicon-level capabilities.

Building a Bloomberg Terminal for GPUs

  • "The compute market globally is highly inefficient... we want to sort that out to where you can know what the interruptible price is, what the 3-month to 6-month prices are. This is something we think is a huge, over a trillion-dollar market that can be addressed."
  • Transparent Order Book: The platform creates a real-time, transparent order book for interruptible compute, with plans to launch futures markets (1, 3, 12 months) in Q1 2026.
  • Financial Derivatives: The long-term vision is to build financial derivatives and stablecoins on top of this compute market, unlocking enormous financial potential and providing financing options for data centers.

Key Takeaways:

  • Targon is leveraging the inherently adversarial environment of a permissionless network to forge a product with security guarantees that centralized providers can't easily match. This "trial-by-fire" approach has forced a level of innovation that turns a perceived weakness into a powerful moat. By building the foundational compute layer for Bittensor, they position themselves as a critical enabler for the entire ecosystem.
  • Security Through Adversity: Targon’s "PTSD" from battling malicious miners forced them to build a cryptographically secure compute layer using TEEs, making their platform more resilient than siloed, trusted alternatives.
  • DeFi Meets DePIN: They are building a transparent financial market for compute, complete with order books and derivatives. The goal isn’t just to rent GPUs; it’s to create the pricing infrastructure for the entire compute economy.
  • The Foundational Layer: Targon is providing a verifiable, secure, and cost-effective compute service that other BitTensor subnets can build upon, potentially supercharging the entire network’s growth and competitive advantage.

For further insights and detailed discussions, watch the full podcast: Link

This episode reveals how Targon is building a secure, enterprise-grade compute marketplace on Bittensor, using trusted execution environments to solve the fundamental trust problem in decentralized AI.

Six Months of Trusted Execution Environments

  • Rob from Targon begins by recapping their journey since implementing Trusted Execution Environments (TEEs), a secure hardware feature that isolates code and data during processing. This move marked a significant pivot for their Bittensor subnet, enabling them to offer stable and secure compute by running workloads in confidential virtual machines (CVMs).
  • This shift allowed Targon to move from a speed-based competition to one centered on price and stability, addressing a core challenge for enterprise adoption.
  • Rob contrasts this iterative capability with the rigidity of early DePIN projects like Golem, which were constrained by less flexible smart contract architectures. He highlights Bittensor's strength in enabling constant evolution.
  • The TEE implementation provides verifiable security, with attestation endpoints from Intel, AMD, and Nvidia confirming the integrity of the compute environment. Attestation is a cryptographic process where the hardware proves it is genuine and running specific, authorized code.

Rob emphasizes the importance of this security layer: "If the host machine could break into the VM and see what workload was being run on there, you know, that's just not something that is really marketable or palatable to most customers."

Technical Deep Dive: Hardware Support and Security

  • Josh, providing technical expertise, details the hardware compatibility and security mechanisms underpinning Targon's platform. The conversation underscores a deep collaboration with hardware manufacturers to push the boundaries of confidential computing.
  • CPU Support: The platform supports fully encrypted workloads on recent-generation Intel and AMD CPUs using technologies like Intel TDX (Trust Domain Extensions), which creates hardware-isolated virtual machines.
  • GPU Support: TEEs are supported on Nvidia's Hopper series GPUs (like the H200), with support for the newer Blackwell architecture expected soon.
  • Strategic Insight: Targon's close collaboration with Intel is influencing future hardware design. Josh reveals that Intel is considering adding hardware-level features requested by the Targon team, demonstrating how decentralized networks can drive innovation in the broader tech stack.

Product Launch: A Self-Serve Compute Platform

  • Targon announces the launch of its fully self-serve platform, making its secure compute resources accessible to a broader audience beyond its initial enterprise clients. This marks a major step toward creating a permissionless, user-friendly cloud alternative.
  • Container Rentals: Users can now rent containers with 1, 2, 4, or 8 H200 GPUs, similar to platforms like RunPod. The platform also offers a large inventory of CPU servers.
  • Serverless SDK: A new SDK allows developers to deploy Python functions, vLLM instances, and web servers, enabling rapid experimentation and A/B testing of AI models.
  • Upcoming Features:
    • GPU Virtualization: Targon will soon allow users to fractionalize GPUs (e.g., use 24GB of an H200 to get Hopper performance at a 4090 price) or combine multiple GPUs into a single virtual GPU for large model training.
    • RDMA Clusters: Support for Remote Direct Memory Access (RDMA), a high-speed networking technology crucial for large-scale distributed training, is coming in a few weeks.
    • Network Volumes: Persistent storage volumes that can be attached to multiple containers simultaneously will simplify data and model management.
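
To make the serverless idea concrete, here is a minimal, hypothetical sketch of what a function-deploy workflow can look like. None of these names come from Targon's actual SDK (which is not shown in the episode); the registry and `invoke` call are local stand-ins for a remote platform.

```python
# Hypothetical sketch of a serverless-style deploy decorator. The registry
# and invoke() are LOCAL stand-ins for a remote platform; Targon's real SDK
# API is not documented in this episode.
from typing import Callable, Dict

_REGISTRY: Dict[str, Callable] = {}  # stand-in for the platform's function store

def deploy(gpu: str = "none"):
    """Register a function for (simulated) remote execution."""
    def wrap(fn: Callable) -> Callable:
        _REGISTRY[fn.__name__] = fn
        fn.gpu = gpu  # hardware the function would request when deployed
        return fn
    return wrap

@deploy(gpu="H200")
def embed(texts):
    # Placeholder workload; a real deployment might call a vLLM instance here.
    return [len(t) for t in texts]

def invoke(name: str, *args):
    """Local stand-in for a remote invocation endpoint."""
    return _REGISTRY[name](*args)

print(invoke("embed", ["hello", "targon"]))  # -> [5, 6]
```

The decorator pattern matters here because it lets developers A/B test model variants by registering two functions and routing traffic between them, without touching infrastructure.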

Enterprise Traction and The Vision for a Compute Marketplace

  • Rob outlines Targon's enterprise success and its ambitious long-term vision to create a transparent, liquid market for compute, complete with financial derivatives.
  • The company has signed its first 12-month enterprise contract and is running active proof-of-concepts with Web3 and AI companies. They are also offering up to $100,000 in credits to attract new enterprise customers.
  • A key innovation is an order book for interruptible compute—resources that can be reclaimed by the provider at any time. Targon's orchestration software provides customers with an uninterruptible experience at a lower, interruptible price.
  • Strategic Implication: Targon aims to solve the price opacity in the global compute market by creating transparent spot and futures markets (planned for Q1 2026). This financialization of compute, including derivatives and stablecoins, represents a multi-trillion dollar addressable market.
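
The mechanics of an order book for interruptible GPU-hours can be sketched in a few lines. This is a toy illustration of the matching idea only; the prices, units, and fields are invented, not Targon's actual market design.

```python
# Toy sketch of a transparent order book for interruptible GPU-hours.
# Prices and fields are invented for illustration.
import heapq

class OrderBook:
    def __init__(self):
        self.asks = []  # min-heap of (price_per_gpu_hour, gpu_hours) sell offers

    def place_ask(self, price, hours):
        heapq.heappush(self.asks, (price, hours))

    def place_bid(self, price, hours):
        """Match a buy order against the cheapest asks; return the fills."""
        fills = []
        while hours > 0 and self.asks and self.asks[0][0] <= price:
            ask_price, ask_hours = heapq.heappop(self.asks)
            take = min(hours, ask_hours)
            fills.append((ask_price, take))
            hours -= take
            if ask_hours > take:  # return the unfilled remainder to the book
                heapq.heappush(self.asks, (ask_price, ask_hours - take))
        return fills

book = OrderBook()
book.place_ask(2.10, 100)  # $/GPU-hour, interruptible
book.place_ask(1.95, 40)
print(book.place_bid(2.00, 60))  # fills the cheaper offer first -> [(1.95, 40)]
```

Because every resting ask is visible, a buyer can read the interruptible spot price directly off the book—the price transparency Rob describes, in contrast to today's opaque bilateral compute deals.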

How TEE Attestation Actually Works

  • In a detailed explanation, Josh demystifies the mechanics of TEE attestation, revealing how Targon verifies the integrity of its decentralized network.
  • The process relies on a secret key burned directly into the CPU hardware, which is inaccessible even to the machine's owner.
  • When a virtual machine boots, it must pass an attestation check that generates a signed report. This report includes metadata about the running process, the VM's configuration, and its IP address.
  • This mechanism cryptographically proves that the correct, secure VM is running on a specific machine and prevents relay attacks or tampering. For GPUs, Targon combines this CPU-level attestation with Nvidia's own verification checks for a multi-layered security model.

Josh explains the core principle: "Once you've proved for sure that the VM is secure... you can trust anything that is going into the user data of the attestation report."
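
The verification steps Josh describes can be sketched conceptually. Real TDX and NVIDIA attestation uses vendor certificate chains and quote-verification services rather than a shared key; in this stand-in, an HMAC plays the role of the chip's hardware signature, and the expected IP is an invented example value.

```python
# Conceptual sketch of attestation-report verification. An HMAC with a
# "fused" key stands in for the hardware signature; real TDX/NVIDIA
# attestation uses vendor certificate chains, not a shared secret.
import hashlib, hmac, json

FUSED_KEY = b"burned-into-the-chip"  # stand-in for the CPU's secret key
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-cvm-image").hexdigest()

def sign_report(report: dict) -> str:
    body = json.dumps(report, sort_keys=True).encode()
    return hmac.new(FUSED_KEY, body, hashlib.sha256).hexdigest()

def verify(report: dict, signature: str) -> bool:
    # 1) The signature proves the report came from genuine hardware.
    if not hmac.compare_digest(sign_report(report), signature):
        return False
    # 2) The measurement proves the exact expected VM image is running.
    if report["measurement"] != EXPECTED_MEASUREMENT:
        return False
    # 3) User data (e.g. the VM's IP) binds the report to one machine,
    #    blocking relay attacks.
    return report["user_data"]["ip"] == "203.0.113.7"

report = {"measurement": EXPECTED_MEASUREMENT,
          "user_data": {"ip": "203.0.113.7"}}
print(verify(report, sign_report(report)))  # -> True
```

This is why, as Josh puts it, once the VM itself is proven secure, anything placed in the report's user data inherits that trust: tampering with any field breaks the signature check in step 1.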

The Evolution of Targon and Future of Privacy

  • Rob recounts the history of Subnet 4, from its origins as "Sybil" (an inference-focused subnet) to its current form. This journey was driven by the realization that verifiable privacy was the most critical problem to solve.
  • Early attempts at verifying LLM inference were plagued by non-determinism, where different hardware produced slightly different outputs, making verification nearly impossible.
  • The pivot to TEEs was a direct response to the enterprise need for privacy and security, a feature that traditional data centers offer but was missing in the decentralized world.
  • Future Trend: The platform is positioned to offer a truly private AI experience, where users can interact with models in a fully encrypted, end-to-end environment. This directly addresses privacy concerns raised by consumers and public figures like Matthew McConaughey regarding cloud-based AI.

Connecting Revenue to the Bittensor Ecosystem

  • Addressing a critical question for investors, Rob discusses Targon's strategy for linking its fiat-based enterprise revenue back to the Bittensor network and its tokenomics.
  • The long-term goal is to build a system where any entity with enough stake on the subnet can deploy their own CVMs, creating a truly decentralized and permissionless compute market.
  • While payments in TAO are straightforward, converting fiat revenue into on-chain value involves navigating complex tax and legal hurdles, especially for a US-based team.
  • Targon is actively developing a legally compliant and automated solution to handle fiat-to-crypto conversions and value accrual, aiming to set a precedent for other subnets building on-chain businesses.

Conclusion

This discussion highlights Targon's transition from a conceptual subnet to a product-driven company delivering a secure, enterprise-ready compute solution. For investors and researchers, the key takeaway is that verifiable security via TEEs is becoming the critical differentiator in the decentralized compute space, unlocking enterprise adoption and new financial markets.
