The Opentensor Foundation | Bittensor TAO
October 10, 2025

Bittensor Novelty Search :: SN4 Targon :: Confidential Cloud Compute

The Targon team unveils their self-serve confidential cloud compute platform, breaking down the hardware-level security and market-making mechanics that allow them to offer a decentralized alternative to AWS that’s not just cheaper, but verifiably more private.

Forging Trust in an Untrusted World

  • "If the host machine could break into the VM and see the workload being run, that's just not something that is marketable or palatable to most customers."
  • "Targon was forged in this network where the miners are being paid to break your stuff. We haven't had an exploit in God knows how long... We're very calm now."

Targon’s core innovation is its use of Trusted Execution Environments (TEEs), a hardware-based security feature in modern Intel, AMD, and Nvidia chips. This technology embeds a cryptographic key directly into the silicon, allowing Targon to create confidential virtual machines (CVMs). These CVMs can provide a cryptographic “attestation” report, proving that a specific, untampered workload is running securely on a machine, a feat previously impossible on untrusted, anonymous hardware. This battle-tested security, hardened by Bittensor’s adversarial environment, provides the verifiable privacy needed to win over enterprise clients.

The Self-Serve Confidential Cloud

  • "We're really excited to be announcing and launching our fully self-served platform for container rentals... you can access the container rentals in the exact same way as if you've ever used RunPod before."

Targon has officially launched its platform at Targon.com, offering on-demand rentals for both CPU servers and high-end GPUs like the H200. The platform is designed for a one-click user experience, aiming to be far simpler than legacy cloud providers. Alongside server rentals, Targon released a serverless SDK for deploying Python functions and web services, creating a comprehensive cloud suite. Upcoming features include GPU virtualization—offering the power of an H200 for the price of a 4090—and RDMA clusters for large-scale AI training.

Price Discovery for Pixels

  • "The compute market globally is highly inefficient... one of the things that we want to do is sort that out to where you can get to know what the interruptible price is, what the three-month price is."

Targon is tackling the opaque and inefficient global compute market head-on. By creating a transparent order book for interruptible compute, they enable true price discovery. This model allows data centers to monetize idle hardware that would otherwise generate zero revenue. The long-term vision is to build out futures markets (spot, 1-month, 12-month) and financial derivatives around compute, transforming it into a liquid, tradable commodity and a trillion-dollar addressable market.

Key Takeaways:

  • Targon is using hardware-level security to build a decentralized cloud that is both cheaper and more private than centralized incumbents. This approach, forged in Bittensor's competitive crucible, is now accessible via a self-serve platform designed to create a transparent, liquid market for global compute.
  • Trust is the New Commodity. Targon’s use of TEEs shifts security from a software promise to a cryptographic hardware guarantee. This verifiable privacy is the key to unlocking enterprise adoption for decentralized AI.
  • The Crucible Creates Diamonds. Bittensor's adversarial environment forced Targon to build an unexploitable system. This has turned a historical pain point ("PTSD from miners") into a core competitive advantage, resulting in a uniquely resilient platform.
  • From Backroom Deals to a Liquid Market. By launching a self-serve platform with a transparent order book, Targon is attacking the compute market's core inefficiency: opaque pricing. Their vision extends to compute derivatives, aiming to turn compute power into a globally tradable asset.

For further insights and detailed discussions, watch the full podcast: Link

This episode reveals how Targon is launching an enterprise-grade confidential cloud platform on Bittensor, leveraging trusted execution environments to solve the critical security and privacy challenges in decentralized AI.

Introduction: Six Months of Confidential Compute

  • Rob from Targon kicks off by recapping the project's progress since their last update, highlighting six months of live operations with Trusted Execution Environments (TEEs). A TEE is a secure, isolated area within a processor that protects code and data from being accessed or modified by the host system, ensuring confidentiality. This capability was a game-changer, allowing Targon to offer stable, secure virtual machines by enabling direct SSH access for workload management.
  • Rob contrasts Bittensor's iterative nature with the rigidity of smart contract-based platforms like Golem, emphasizing Bittensor's strength in allowing constant evolution.
  • The subnet has successfully transitioned from a speed-based competition to a price-based one, enhancing stability for enterprise customers.
  • Security is paramount, with workloads protected not only from Targon but also verifiable by third parties like Intel, AMD, and Nvidia through their attestation endpoints.

Technical Breakthroughs and Hardware Support

  • The discussion details the technical advancements that underpin Targon's platform. The team has achieved fully encrypted support for both Intel (TDX) and AMD (SEV) CPUs, significantly broadening the range of compatible hardware. This allows them to support any recent-generation CPU server.
  • Josh, providing technical depth, explains that on the GPU side, support is currently focused on Nvidia's Hopper series, with Blackwell compatibility expected in one to two months.
  • Rob emphasizes the non-negotiable nature of security for enterprise adoption. He states, "If the host machine could break into the VM and see the workload that was being run on there, you know that's just not something that is really marketable or palatable to most customers."
  • Targon is working closely with Intel to shape next-generation TDX features. Josh reveals that Intel is incorporating hardware-level functionality requested by the Targon team, demonstrating a deep, collaborative partnership.

Product Launch: Self-Serve Platform and Serverless SDK

  • Targon announces the launch of its fully self-serve platform for container and CPU server rentals, accessible at targon.com. The user experience is designed to be familiar to users of platforms like RunPod, allowing anyone to deploy workloads without direct interaction with the Targon team.
  • GPU Offerings: The platform currently supports H200s, with plans to introduce GPU virtualization. This will allow users to combine multiple GPUs for larger VRAM (e.g., two H200s for 282 GB of VRAM) or fractionalize a single GPU to get H200 performance at a lower price point.
  • CPU Offerings: A large inventory of virtualized CPU servers is available, supporting Docker with BusyBox and Sysbox to provide a VM-like experience within containers.
  • Serverless SDK: Targon is also releasing a serverless SDK for deploying Python functions, VLM instances, web servers, and web scrapers. This creates a comprehensive cloud solution for customers to build entire pipelines, from A/B testing models to fine-tuning and deployment.
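The serverless pattern described above can be sketched in a few lines: plain Python functions are registered as deployable endpoints via a decorator. Everything here (the `function` decorator, the `registry`) is illustrative; the recap does not document Targon's actual SDK API.

```python
# Minimal sketch of the serverless-SDK pattern: a decorator registers
# ordinary Python functions as deployable endpoints. All names here are
# hypothetical stand-ins, not Targon's real API.
registry = {}

def function(name):
    """Register a plain Python function as a deployable endpoint."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@function("summarize")
def summarize(text: str) -> str:
    """An example workload: return the first sentence of the input."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return sentences[0] + "." if sentences else ""

print(registry["summarize"]("TEEs isolate workloads. Attestation proves it."))
```

In a real SDK, deployment would ship the registered functions to the platform; here the registry simply makes the decorator's bookkeeping visible.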

Upcoming Features: RDMA Clusters and Network Volumes

  • RDMA Clusters: Support for RDMA (Remote Direct Memory Access) will be available for pre-training and large-scale fine-tuning. RDMA enables direct memory access between servers, bypassing the operating system to achieve high-throughput, low-latency networking crucial for distributed training.
  • Network Volumes: Multi-node network volumes will be supported across both the rental and serverless platforms. This allows users to connect datasets and model weights to multiple containers simultaneously, ensuring data persistence and ease of use.

Enterprise Traction and Strategic Partnerships

  • Rob highlights Targon's growing enterprise adoption and key partnerships that strengthen its market position.
  • The team has signed its first 12-month enterprise contract and has a strong pipeline of Web3 and Frontier AI companies in active proof-of-concepts.
  • To accelerate adoption, Targon is offering up to $100,000 in credits to eligible enterprise customers.
  • A key strategic partner is Dippy, which processes millions of requests daily on Targon for its 8 million users.
  • Targon has also partnered with major data centers in the United States and Latin America, securing access to thousands of GPUs with room for 10x expansion and achieving gross margins between 40% and 90%.

The Vision for a Transparent Compute Market

  • A core part of Targon's strategy is to create a transparent, efficient market for compute. They have developed an order book for interruptible compute, matching sellers through their incentive mechanism so that customers get an uninterruptible experience at a lower price.
  • Rob draws a parallel between the opaque pricing in the US healthcare system and the current compute market, arguing that a lack of transparency prevents efficiency.
  • The long-term vision includes futures markets (spot, 1-month, 3-month, 12-month) and financial derivatives built on top of compute, which they see as a potential trillion-dollar market.
  • This permissionless financial layer aims to unlock new financing models for data centers, allowing them to monetize idle hardware on the interruptible market.
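The price-discovery idea behind an interruptible-compute order book can be illustrated with a toy matching loop: idle capacity is listed as asks, and buy orders fill against the cheapest capacity first. The prices and quantities below are invented, and Targon's actual matching engine is not described in the episode; this is a minimal sketch of the concept.

```python
# Toy order book for interruptible GPU-hours: buy orders fill against
# the cheapest listed idle capacity first (price-priority matching).
import heapq

class ComputeOrderBook:
    def __init__(self):
        self.asks = []  # min-heap of (price_per_gpu_hour, quantity)

    def place_ask(self, price, qty):
        """A data center lists idle GPU-hours at a given price."""
        heapq.heappush(self.asks, (price, qty))

    def fill(self, qty_wanted):
        """Fill a buy order against the cheapest capacity first."""
        fills = []
        while qty_wanted > 0 and self.asks:
            price, qty = heapq.heappop(self.asks)
            take = min(qty, qty_wanted)
            fills.append((price, take))
            qty_wanted -= take
            if qty > take:  # return the unfilled remainder to the book
                heapq.heappush(self.asks, (price, qty - take))
        return fills

book = ComputeOrderBook()
book.place_ask(2.10, 8)   # hypothetical data center A: 8 hours at $2.10
book.place_ask(1.80, 4)   # hypothetical data center B: 4 hours at $1.80
print(book.fill(6))       # cheapest first: [(1.8, 4), (2.1, 2)]
```

The marginal price paid for the last filled hour is the discovered market price; publishing it is what turns opaque bilateral deals into a transparent, tradable market.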

Const's User Experience and the TEE Revolution

  • Const, the podcast host, shares his firsthand experience using Targon's new self-serve platform. He needed CPU servers and found Targon's offering to be simpler, faster, and more powerful than AWS EC2, highlighting the superior user experience of the decentralized solution.
  • Const notes the revolutionary nature of TEEs for the decentralized physical infrastructure (DePIN) space, recounting a conversation with a Golem representative who didn't believe the technology was possible.
  • He sees TEEs as a foundational layer for Bittensor, enabling subnets to offer monetizable, private, and verifiable services to enterprise customers. He states, "All of a sudden you become...able to give assurance to...Fortune 500 companies that no other team in the world [can]."

The Technical Mechanics of Attestation and Security

  • Josh provides a detailed explanation of how TEE attestation works. Attestation is a cryptographic process where a chip uses a secret key, burned into the hardware itself, to sign a report containing metadata about the running process. This allows Targon to verify that the correct, secure VM is running on a specific machine.
  • The process verifies the VM's startup flags, GPU configuration (via Nvidia SMI and NVCC checks), and other critical metadata.
  • To prevent relay attacks, a VM is hard-locked to its IP address upon its initial boot-up attestation. If the VM is moved or its location is misrepresented, it will fail verification and refuse to run.
  • Josh explains that this hardware-level trust is essential for a network of anonymous and untrusted compute providers, as purely software-based solutions are fundamentally vulnerable.
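The attestation flow above can be sketched with a toy example. Real TEE attestation (Intel TDX, AMD SEV, Nvidia) uses vendor-issued certificate chains and asymmetric keys fused into the chip; this sketch substitutes an HMAC shared secret purely to show the "signed report + IP hard-lock" idea, and every name in it is illustrative.

```python
# Toy illustration of attestation: the "chip" signs a report of the VM's
# metadata with a key only the hardware holds, and the verifier rejects
# any report whose IP differs from the one locked in at first boot.
# HMAC is a stand-in here; real TEEs use asymmetric keys and cert chains.
import hashlib
import hmac
import json

HARDWARE_KEY = b"burned-into-silicon"  # stand-in for the fused chip key

def attest(report: dict) -> str:
    """The chip signs a report describing the running VM."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(HARDWARE_KEY, payload, hashlib.sha256).hexdigest()

def verify(report: dict, signature: str, expected_ip: str) -> bool:
    """Check the signature AND the IP hard-lock against relay attacks."""
    payload = json.dumps(report, sort_keys=True).encode()
    expected = hmac.new(HARDWARE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected) and report["ip"] == expected_ip

report = {"vm_measurement": "abc123", "gpu": "H200", "ip": "203.0.113.7"}
sig = attest(report)
print(verify(report, sig, expected_ip="203.0.113.7"))   # True: VM in place
print(verify(report, sig, expected_ip="198.51.100.9"))  # False: VM moved
```

Tampering with any field of the report, or relaying the VM to a different machine, changes either the signature check or the IP check, so verification fails exactly as described above.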

Targon's Competitive Edge and Evolution

  • The conversation explores Targon's journey and what gives it a competitive advantage. Rob recounts the evolution of Subnet 4 from its origins as "Sibyl," an inference and search mechanism, to its current focus on confidential compute.
  • The initial challenges of verifying non-deterministic LLM inference and of ensuring user privacy led them to pivot to TEEs in November 2023.
  • Rob argues that Targon's experience fending off malicious miners has forged a battle-hardened, security-obsessed culture. This allows them to focus on customer experience rather than constantly patching exploits.
  • The team's software expertise enables them to offer superior products, such as virtualized GPUs that provide the performance of an H200 (including the Transformer Engine) at the price point of a 4090 by limiting VRAM.

Connecting Revenue to the Bittensor Ecosystem

  • Const raises a critical question about how fiat-based revenue from enterprise contracts will flow back to the Bittensor network and token holders.
  • Rob explains that while customers paying in TAO makes buybacks straightforward, fiat payments introduce significant tax and legal complexities for a US-based team.
  • The team is actively working with legal counsel to develop a compliant and automated solution for converting fiat revenue to on-chain value, aiming to create a legal precedent for other on-chain businesses.
  • Targon's ultimate goal is to build a robust, permissionless compute market that benefits the entire ecosystem, including other subnets, by providing a foundational service for secure, low-cost compute.

Conclusion

This episode demonstrates that verifiable, confidential compute is a foundational primitive for decentralized AI. Targon's launch of a self-serve TEE-powered platform positions it as a critical infrastructure layer, creating immediate opportunities for enterprise adoption and providing a secure, efficient compute marketplace for the entire Bittensor network.
