This episode reveals how Targon is building a secure, enterprise-grade compute marketplace on Bittensor, using trusted execution environments to solve the fundamental trust problem in decentralized AI.
Six Months of Trusted Execution Environments
- Rob from Targon begins by recapping their journey since implementing Trusted Execution Environments (TEEs), a secure hardware feature that isolates code and data during processing. This move marked a significant pivot for their Bittensor subnet, enabling them to offer stable and secure compute by running workloads in confidential virtual machines (CVMs).
- This shift allowed Targon to move from a speed-based competition to one centered on price and stability, addressing a core challenge for enterprise adoption.
- Rob contrasts this iterative capability with the rigidity of early DePIN projects like Golem, which were constrained by less flexible smart contract architectures. He highlights Bittensor's strength in enabling constant evolution.
- The TEE implementation provides verifiable security, with attestation endpoints from Intel, AMD, and Nvidia confirming the integrity of the compute environment. Attestation is a cryptographic process where the hardware proves it is genuine and running specific, authorized code.
Rob emphasizes the importance of this security layer: "If the host machine could break into the VM and see the workload that was being run on there, you know, that's just not something that is really marketable or palatable to most customers."
Technical Deep Dive: Hardware Support and Security
- Josh, providing technical expertise, details the hardware compatibility and security mechanisms underpinning Targon's platform. The conversation underscores a deep collaboration with hardware manufacturers to push the boundaries of confidential computing.
- CPU Support: The platform supports fully encrypted workloads on recent-generation Intel and AMD CPUs using technologies like Intel TDX (Trust Domain Extensions), which creates hardware-isolated virtual machines.
- GPU Support: TEEs are supported on Nvidia's Hopper series GPUs (like the H200), with support for the newer Blackwell architecture expected soon.
- Strategic Insight: Targon's close collaboration with Intel is influencing future hardware design. Josh reveals that Intel is considering adding hardware-level features requested by the Targon team, demonstrating how decentralized networks can drive innovation in the broader tech stack.
Product Launch: A Self-Serve Compute Platform
- Targon announces the launch of its fully self-serve platform, making its secure compute resources accessible to a broader audience beyond its initial enterprise clients. This marks a major step toward creating a permissionless, user-friendly cloud alternative.
- Container Rentals: Users can now rent containers with 1, 2, 4, or 8 H200 GPUs, similar to platforms like RunPod. The platform also offers a large inventory of CPU servers.
- Serverless SDK: A new SDK allows developers to deploy Python functions, VLM instances, and web servers, enabling rapid experimentation and A/B testing of AI models.
- Upcoming Features:
- GPU Virtualization: Targon will soon allow users to fractionalize GPUs (e.g., use 24GB of an H200 to get Hopper performance at a 4090 price) or combine multiple GPUs into a single virtual GPU for large model training.
- RDMA Clusters: Support for Remote Direct Memory Access (RDMA), a high-speed networking technology crucial for large-scale distributed training, is coming in a few weeks.
- Network Volumes: Persistent storage volumes that can be attached to multiple containers simultaneously will simplify data and model management.
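The fractionalization idea above can be illustrated with a toy memory-slicing allocator. This is a sketch under stated assumptions, not Targon's scheduler or API — the class, method names, and tenant labels are invented, and the only grounded figure is the H200's 141 GB of HBM:

```python
from dataclasses import dataclass, field

@dataclass
class FractionalGPU:
    """Toy model of slicing one physical GPU's memory into rentable fractions.

    Illustrative only -- not Targon's actual scheduler. An H200 carries
    141 GB of HBM3e; slices are allocated against that pool.
    """
    total_gb: int = 141
    allocations: dict = field(default_factory=dict)

    def allocated_gb(self) -> int:
        return sum(self.allocations.values())

    def allocate(self, tenant: str, gb: int) -> bool:
        """Grant `gb` of memory to `tenant` if the pool has room."""
        if gb <= 0 or self.allocated_gb() + gb > self.total_gb:
            return False
        self.allocations[tenant] = self.allocations.get(tenant, 0) + gb
        return True

    def release(self, tenant: str) -> None:
        self.allocations.pop(tenant, None)

gpu = FractionalGPU()
assert gpu.allocate("tenant-a", 24)      # a 4090-sized slice of an H200
assert gpu.allocate("tenant-b", 80)
assert not gpu.allocate("tenant-c", 48)  # would overcommit the 141 GB pool
```

The inverse direction mentioned above — combining several GPUs into one virtual GPU — is the same bookkeeping run across multiple pools, with the hard part (memory coherence and interconnect bandwidth) handled below this layer.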
Enterprise Traction and The Vision for a Compute Marketplace
- Rob outlines Targon's enterprise success and its ambitious long-term vision to create a transparent, liquid market for compute, complete with financial derivatives.
- The company has signed its first 12-month enterprise contract and is running active proof-of-concepts with Web3 and AI companies. They are also offering up to $100,000 in credits to attract new enterprise customers.
- A key innovation is an order book for interruptible compute—resources that can be reclaimed by the provider at any time. Targon's orchestration software provides customers with an uninterruptible experience at a lower, interruptible price.
- Strategic Implication: Targon aims to solve the price opacity in the global compute market by creating transparent spot and futures markets (planned for Q1 2026). This financialization of compute, including derivatives and stablecoins, represents a multi-trillion dollar addressable market.
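The order-book mechanic described above can be sketched in a few lines: the orchestrator buys the cheapest interruptible asks that cover a request, and would migrate the workload to the next-cheapest offer if a provider reclaims capacity. All names, prices, and the matching rule here are illustrative assumptions, not Targon's market design:

```python
import heapq

def match_order(book, gpus_needed):
    """Fill a GPU request from the cheapest interruptible asks.

    `book` is a list of (price_per_gpu_hr, provider, gpus) asks.
    Returns the fills and the blended $/GPU-hr. On interruption, the
    orchestrator would re-run this match against the remaining book.
    """
    heap = list(book)
    heapq.heapify(heap)  # min-heap keyed on price: price priority
    fills, remaining = [], gpus_needed
    while remaining > 0 and heap:
        price, provider, gpus = heapq.heappop(heap)
        take = min(gpus, remaining)
        fills.append((provider, take, price))
        remaining -= take
    if remaining > 0:
        raise RuntimeError("insufficient capacity on the book")
    cost = sum(take * price for _, take, price in fills)
    return fills, cost / gpus_needed

# Hypothetical asks from three providers
book = [(2.10, "dc-west", 4), (1.80, "dc-east", 2), (2.50, "dc-eu", 8)]
fills, blended = match_order(book, 6)
# cheapest-first: 2 GPUs at $1.80, then 4 at $2.10 -> blended $2.00/GPU-hr
```

The customer-facing price is the blended interruptible rate, while the failover logic is what delivers the uninterruptible experience Rob describes.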
How TEE Attestation Actually Works
- In a detailed explanation, Josh demystifies the mechanics of TEE attestation, revealing how Targon verifies the integrity of its decentralized network.
- The process relies on a secret key burned directly into the CPU hardware, which is inaccessible even to the machine's owner.
- When a virtual machine boots, it must pass an attestation check that generates a signed report. This report includes metadata about the running process, the VM's configuration, and its IP address.
- This mechanism cryptographically proves that the correct, secure VM is running on a specific machine and prevents relay attacks or tampering. For GPUs, Targon combines this CPU-level attestation with Nvidia's own verification checks for a multi-layered security model.
Josh explains the core principle: "Once you've proved for sure that the VM is secure... you can trust anything that is going into the user data of the attestation report."
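The flow Josh describes — a hardware-held key signing a report that binds code measurement, VM configuration, and IP — can be sketched as follows. This is a minimal illustration, not the actual Intel TDX/AMD SEV report format: the field names are invented, and the shared HMAC key merely stands in for the CPU-fused secret, whose real-world signatures chain to a vendor certificate rather than a shared key:

```python
import hashlib
import hmac
import json

# Stand-in for the secret burned into the CPU at manufacture; inaccessible
# to the machine's owner in real hardware.
HARDWARE_KEY = b"fused-at-fab-demo-key"

def sign_report(measurement: str, vm_config: str, ip: str) -> dict:
    """Hardware side: emit an attestation report bound to the running code."""
    body = json.dumps({"measurement": measurement,
                       "vm_config": vm_config,
                       "ip": ip}, sort_keys=True)
    sig = hmac.new(HARDWARE_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_report(report: dict, expected_measurement: str,
                  expected_ip: str) -> bool:
    """Validator side: check the signature first, then the bound metadata."""
    sig = hmac.new(HARDWARE_KEY, report["body"].encode(),
                   hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, report["sig"]):
        return False  # forged or tampered report
    body = json.loads(report["body"])
    # Binding the IP into the signed body is what defeats relay attacks:
    # a report generated on machine A cannot be replayed from machine B.
    return (body["measurement"] == expected_measurement
            and body["ip"] == expected_ip)

report = sign_report("sha256:abc123", "cvm-v1", "203.0.113.7")
assert verify_report(report, "sha256:abc123", "203.0.113.7")
assert not verify_report(report, "sha256:abc123", "198.51.100.9")
```

Once the signature checks out, Josh's principle applies: everything inside the signed body can be trusted, because only genuine hardware running the measured code could have produced it.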
The Evolution of Targon and Future of Privacy
- Rob recounts the history of Subnet 4, from its origins as "Sybil" (an inference-focused subnet) to its current form. This journey was driven by the realization that verifiable privacy was the most critical problem to solve.
- Early attempts at verifying LLM inference were plagued by non-determinism, where different hardware produced slightly different outputs, making verification nearly impossible.
- The pivot to TEEs was a direct response to the enterprise need for privacy and security, a feature that traditional data centers offer but was missing in the decentralized world.
- Future Trend: The platform is positioned to offer a truly private AI experience, where users can interact with models in a fully encrypted, end-to-end environment. This directly addresses privacy concerns raised by consumers and public figures like Matthew McConaughey regarding cloud-based AI.
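The non-determinism that doomed inference verification is rooted in floating-point arithmetic: addition is not associative, so two machines that merely reduce a sum in a different order (as different GPUs and kernels routinely do) can legitimately produce different logits. A minimal illustration:

```python
# Floating-point addition is not associative: the same three values summed
# in a different order give different results, because 1.0 is smaller than
# the rounding step (ulp) of 1e16 and vanishes when added to it first.
vals = [1e16, 1.0, -1e16]

left_to_right = (vals[0] + vals[1]) + vals[2]  # 1.0 is absorbed into 1e16
reordered     = (vals[0] + vals[2]) + vals[1]  # large terms cancel first

print(left_to_right, reordered)  # 0.0 1.0
```

Scaled up to billions of such operations per token, bit-exact agreement across heterogeneous hardware becomes practically unattainable — which is why Targon pivoted from output verification to verifying the execution environment itself.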
Connecting Revenue to the Bittensor Ecosystem
- Addressing a critical question for investors, Rob discusses Targon's strategy for linking its fiat-based enterprise revenue back to the Bittensor network and its tokenomics.
- The long-term goal is to build a system where any entity with enough stake on the subnet can deploy their own CVMs, creating a truly decentralized and permissionless compute market.
- While payments in TAO are straightforward, converting fiat revenue into on-chain value involves navigating complex tax and legal hurdles, especially for a US-based team.
- Targon is actively developing a legally compliant and automated solution to handle fiat-to-crypto conversions and value accrual, aiming to set a precedent for other subnets building on-chain businesses.
Conclusion
This discussion highlights Targon's transition from a conceptual subnet to a product-driven company delivering a secure, enterprise-ready compute solution. For investors and researchers, the key takeaway is that verifiable security via TEEs is becoming the critical differentiator in the decentralized compute space, unlocking enterprise adoption and new financial markets.