This episode unveils Manifold's groundbreaking Targon Virtual Machine (TVM), detailing its confidential computing capabilities and outlining a strategic pivot towards secure, verifiable AI workloads and ambitious future models like JEPA.
Manifold Q1 Updates: Brand Refresh and Product Enhancements
- Civil: Their hybrid search engine received performance improvements that reduce search load times, and users can now select different inference models, reflecting the new branding. Rob mentions further development is planned for Civil this quarter, particularly integrating TVM capabilities.
- Tao XYZ: Positioned as a "Bloomberg terminal for monitoring the Bittensor blockchain," Tao XYZ aims to consolidate various monitoring scripts (like tracking pending extrinsics, mempool activity, and tagged wallets) into a single interface. Rob emphasizes its utility for daily checks, replacing cumbersome multi-step processes.
- Tao XYZ Wallet: Addressing user dissatisfaction with existing wallets, Manifold launched the Tao XYZ wallet. It features native integration, uses polkadot.js (making it compatible with existing dApps), and notably supports both Chrome/Chromium-based browsers (like Arc) and Firefox. Rob shares positive engagement metrics for the Tao XYZ website, citing a low bounce rate (1%) and high average session duration (~10 minutes) as indicators of user satisfaction.
- Targon Redesign: Acknowledging that early Bittensor development focused more on technical backend than frontend aesthetics, Manifold launched a redesigned targon.com. This update, dropped just before the call, aims for a more polished and user-friendly interface, aligning with the overall brand refresh.
Introducing the Targon Virtual Machine (TVM): Confidential AI Compute
- Core Technology: TVM utilizes TEEs (Trusted Execution Environments) on the CPU side and Nvidia Confidential Compute on the GPU side. TEEs provide secure enclaves within the CPU where code and data are protected, even from the host operating system. Nvidia Confidential Compute extends similar hardware-level security and encryption to the GPU and the link between CPU and GPU.
- Security Rationale: Rob stresses the importance of both CPU and GPU confidentiality. He explains, "if you implement TEEs in your subnet... but don't do Nvidia Confidential Compute, you can do a man-in-the-middle attack between the CPU and GPU processes to intercept and receive the information." TVM's dual approach prevents this vulnerability.
- Simplified Deployment: Despite the complexity, particularly of Nvidia Confidential Compute, Manifold has focused on making the TVM node installation process straightforward.
- Use Cases & Benefits: TVM enables fully confidential AI workloads, including training and inference. This is crucial for applications requiring privacy, such as potentially charging for services on platforms like OpenRouter, where miners previously could read message content. It allows for an end-to-end confidential workflow: training, fine-tuning, and inference deployment all secured within Targon, enhancing the subnet's economic potential and value proposition.
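The CPU-to-GPU interception risk described above can be illustrated with a toy model. This is plain Python, not real TEE or NVIDIA APIs: `HostBus` stands in for the PCIe link as seen by a malicious host, and `xor_stream` is a stand-in session cipher, only to show why encrypting data solely inside the CPU enclave is insufficient.

```python
# Toy model of the CPU->GPU man-in-the-middle risk (illustrative only;
# no real TEE or NVIDIA Confidential Compute APIs are used here).
import secrets

def xor_stream(data: bytes, key: bytes) -> bytes:
    # toy symmetric cipher, stand-in for a real encrypted session
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class HostBus:
    """The CPU-GPU link, as observed by a malicious host OS."""
    def __init__(self):
        self.observed = []
    def transfer(self, payload: bytes) -> bytes:
        self.observed.append(payload)  # host can read whatever crosses the bus
        return payload

prompt = b"confidential user prompt"
bus = HostBus()

# Case 1: TEE on the CPU only -- data leaves the enclave in plaintext
# before the GPU transfer, so the host sees it.
bus.transfer(prompt)
assert prompt in bus.observed  # MitM succeeds

# Case 2: TEE + GPU confidential compute -- CPU and GPU share a session
# key, so the bus only ever sees ciphertext.
session_key = secrets.token_bytes(32)
ciphertext = bus.transfer(xor_stream(prompt, session_key))
gpu_plaintext = xor_stream(ciphertext, session_key)  # decrypted inside the GPU
assert gpu_plaintext == prompt
```
The two cases mirror Rob's point: without Nvidia Confidential Compute, the encrypted enclave boundary ends at the CPU, and everything crossing the bus is readable.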
Addressing Centralization Concerns and Future Vision for TVM
- Decentralized Validation: Rob counters centralization fears by highlighting a key TVM capability: "with the TVM we can now do validation on a Raspberry Pi." This dramatically lowers the barrier for validation, potentially enabling widespread, permissionless participation from individuals globally, even if the compute itself relies on high-end GPUs.
- Distinction from Heterogeneous Compute: Rob clarifies that Targon's goal isn't necessarily the "holy grail" of heterogeneous compute over residential internet (running complex AI on varied home hardware). Instead, the focus is on permissionless validation participation while leveraging enterprise-grade GPUs in data centers.
- Competitive Ambition: The ultimate aim is to compete directly with frontier AI labs (OpenAI, xAI, Anthropic) by aggregating significant GPU power within a secure, decentralized framework. "Our ambition with the Targon Virtual Machine is to go head-to-head... with those frontier AI labs," Rob states.
Solving the Data Distribution Challenge for Decentralized Training
- The Problem: Training across multiple data centers (e.g., Netherlands, Dallas, Japan) requires datasets to be available locally at each site, as technologies like RDMA (Remote Direct Memory Access), which allows direct memory access between hosts for high throughput and low latency, are typically confined within a single data center network.
- Targon's Solution: Manifold plans to develop capabilities for model and dataset hosting directly within the Targon ecosystem. This aims to provide secure, distributed storage solutions necessary for large-scale training efforts, reducing reliance on third-party services and addressing the data locality challenge inherent in cross-data center operations. Progress on this will be a focus for Q2.
The Evolution of Compute on Targon: From Public to Private
- Yesterday: Public compute, ranging from containerized environments to bare metal access.
- Today (with TVM): Fully private compute with SSH access and bare metal support, deployable via Kubernetes or direct SSH. This enables secure hosting of datasets and models, supporting partners like Templar (who received 16 H100s from Manifold for their work).
Navigating the Targon V5 to V6 Incentive Transition
- Initial Boost: The transition starts with a 70/30 incentive split favoring the new confidential nodes to encourage adoption, acknowledging the difficulty of setting up TEEs and Nvidia Confidential Compute.
- Gradual Annealing: This boosted incentive will decrease over two weeks (approx. 2.14% per day) to reach a new equilibrium, balancing the need to bootstrap confidential compute with maintaining existing services. Rob emphasizes this careful approach: "we're not trying to throw you all into the deep... we wouldn't be able to do this without y'all."
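The arithmetic behind the ~2.14%/day figure can be sketched directly. This assumes a linear daily drop spread evenly over 14 days, and that the 30-point boost (from 70% down toward 40%) is what gets annealed away; the exact final equilibrium split is not stated on the call.

```python
# Sketch of a linear annealing schedule for the V5->V6 incentive split.
# Assumptions: linear drop, 14-day window, 30 percentage points annealed
# (70% start, landing near 40%); the true endpoint isn't stated explicitly.
DAYS = 14
START = 70.0             # initial share for confidential (V6) nodes
STEP = 30.0 / DAYS       # ~2.14 percentage points per day

schedule = [START - STEP * day for day in range(DAYS + 1)]

assert round(STEP, 2) == 2.14   # matches the quoted ~2.14%/day
assert schedule[0] == 70.0
assert schedule[-1] == 40.0
```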
Beyond Autoregressive Models: The Case for JEPA
- Failings of Autoregressive Models: Rob argues that current autoregressive models (like ChatGPT), which predict tokens sequentially, often make factual/logical errors that compound, especially in long outputs. They optimize for the next token's likelihood without deeper understanding or long-term planning, essentially acting as sophisticated compression algorithms for their training data. "They don't actually know what they don't know," Rob explains.
- Reasoning Models Limitations: Even advanced "reasoning models" (like DeepSeek R1 or hypothetical OpenAI models) that explore more possibilities are still fundamentally limited by their training data; they cannot handle truly novel situations.
JEPA Explained: Learning World Models and Dependencies
- Core Idea: JEPA aims to build an internal "world model" by learning the dependencies between different inputs (X and Y), which could be text, images, or other data modalities.
- Abstract Representations: Instead of generating text directly, JEPA learns to output an abstract representation (a tensor) capturing these dependencies. This tensor can then be fed into a decoder (potentially an autoregressive one) to generate specific outputs.
- ARC-AGI-2 Example: Rob uses a visual reasoning task from the ARC-AGI-2 challenge (determining coloring rules based on "holes" in shapes) where current models perform poorly (~5% accuracy). Humans easily grasp the underlying dependency, which JEPA aims to replicate.
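The core loop described above, encoding X and Y, predicting Y's representation from X's, and scoring the prediction in latent space rather than token space, can be sketched as follows. This is an illustrative toy, not a reference JEPA implementation: the linear encoder, the predictor, and the dimension are all stand-ins.

```python
# Toy JEPA-style sketch: learn/score dependencies between X and Y in an
# abstract representation space instead of generating Y token-by-token.
import random

random.seed(0)
DIM = 8

def encoder(x, weights):
    # toy linear encoder: maps an input vector to an abstract representation
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in weights]

def predictor(sx, weights):
    # predicts Y's embedding from X's embedding (the learned dependency)
    return [sum(wi * si for wi, si in zip(row, sx)) for row in weights]

def latent_loss(pred, target):
    # energy: squared distance between predicted and actual embeddings
    return sum((p - t) ** 2 for p, t in zip(pred, target))

W_enc = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
W_pred = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]

x = [random.gauss(0, 1) for _ in range(DIM)]
y = [xi * 2.0 for xi in x]  # Y depends deterministically on X

sx, sy = encoder(x, W_enc), encoder(y, W_enc)
loss = latent_loss(predictor(sx, W_pred), sy)
assert loss >= 0.0
```
Training would minimize this latent loss so the predictor captures the X-to-Y dependency; only afterwards would a separate decoder (possibly autoregressive) turn the abstract tensor into concrete output.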
Agentic JEPA: Incorporating Cost Functions for Real-World AI
- Hierarchical Prediction & Cost: This involves stacking prediction heads operating at different time scales (e.g., predicting outcomes at 25ms, 50ms, 100ms). Crucially, it incorporates a cost function alongside the reward function common in reinforcement learning.
- Optimizing for Survival: The AI learns to maximize immediate rewards (next action) while simultaneously minimizing long-term costs (potential negative consequences). "Humans have to optimize not just for maximizing the reward but for minimizing the cost in the future," Rob analogizes, comparing it to how animals navigate the world to avoid harm.
- Embodied AI & ASI Potential: This architecture is proposed as the path towards embodied AI (robots that understand and interact with the physical world) and potentially Artificial Super Intelligence (ASI), capable of complex, multi-step planning and execution while considering constraints and risks. TVM's confidential compute ensures the value generated stays within the Bittensor/Targon ecosystem.
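The reward-versus-cost trade-off described above can be shown with a minimal scoring sketch. The action values, cost lists, and discount factor here are hypothetical, chosen only to show that a plan with lower immediate reward can still win once discounted future costs are counted.

```python
# Toy "maximize reward, minimize future cost" objective (hypothetical numbers).
def plan_score(reward_now, future_costs, discount=0.9):
    # discounted sum of anticipated costs at increasing horizons
    penalty = sum(c * discount ** t for t, c in enumerate(future_costs, start=1))
    return reward_now - penalty

# A greedy plan: big immediate reward, heavy downstream costs.
greedy = plan_score(reward_now=10.0, future_costs=[8.0, 8.0, 8.0])
# A cautious plan: smaller reward, small sustained costs.
cautious = plan_score(reward_now=6.0, future_costs=[1.0, 1.0, 1.0])

assert cautious > greedy  # the risk-aware plan wins despite lower immediate reward
```
A hierarchical JEPA agent would evaluate something like this at each of its stacked time scales, which is what distinguishes the proposal from pure reward-maximizing reinforcement learning.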
Live Demos: Tao XYZ, Targon.com, and Civil.com
- Tao XYZ: Showcases the "Bloomberg feel" with minimal scrolling, navigating subnets (using Subnet 18/Zeus as an example), viewing delegation tables, identifying MEV bots ("sandwiches"), tracking large transactions, and viewing wallet details. He highlights real-time updates and upcoming features like better labeling, improved parameter visibility, and faster load times (targeting 6ms). Analytics and news tabs are also shown.
- Targon.com: Displays the new sleek redesign, emphasizing the unified design language across Manifold products. He walks through the playground, model browsing, and parameter adjustments. Rob reiterates the focus isn't on reselling raw compute due to inefficient market dynamics (multiple brokers taking cuts), but on value-added services like AI training and secure inference.
- Civil.com: Shows the redesigned interface, ability to switch models (R1, V3), and faster search results. He notes it's currently running on a separate test cluster but will be moved back to the main Targon infrastructure.
- Tao XYZ Wallet: Not demoed live due to privacy concerns, but confirmed availability on Chrome and Firefox extension stores.
Deep Dive Q&A: TVM Mechanics and Implications
- Validation: Confirmed that validators check cryptographic proofs generated by the TVM, enabling lightweight validation (e.g., on a Raspberry Pi). Validators still need stake and can assign arbitrary computation jobs to miners running TVM.
- Continuity: Targon Hub (inference service) remains operational; TVM changes the backend validation and compute environment without disrupting existing inference capabilities.
- Training Coordination: TVM allows Manifold (as subnet owner/major validator) to coordinate miners' machines for large-scale training runs.
- Confidential Storage: Rob clarifies that for cross-data center training, datasets need to reside in each location. The planned Targon storage solution will be fully encrypted and confidential, designed to handle terabyte-scale AI data without needing ZK proofs, solving a key logistical challenge for distributed training. Model weights or intermediate steps, not necessarily the full dataset, are transferred during training, but data redundancy across sites is crucial.
Miner Innovation in a Confidential Compute Environment
- Shift to Hardware Optimization: Josh (Manifold CTO) explains that while TVM restricts what software runs (it must be within the verified confidential environment), it doesn't restrict the underlying hardware beyond compatibility with confidential compute (specific CPU/GPU features). Innovation shifts to hardware optimization: "how fast is your storage? How fast is your latency between your nodes?" Miners can compete by building superior, high-performance clusters (e.g., multiple 8x H200 nodes with Infiniband, shared storage), and the network can incentivize these verifiable hardware characteristics.
- Initial Challenge: Rob adds that simply setting up and scaling nodes that pass attestation for confidential compute is initially a significant technical challenge worthy of incentive. Future competitions on top of confidential compute might be reintroduced later.
Confidential Compute: The Key to Enterprise Adoption and Monetization
- Trust vs. Verification: Jake notes that confidential computing addresses enterprise concerns about data privacy, moving beyond trusting providers like OpenAI to verifying data security cryptographically. "Don't trust, verify," Josh adds.
- User Demand: Josh emphasizes that customer feedback prioritized security, privacy, and consistency (e.g., reliable rerolls respecting parameters) over raw speed. TVM directly addresses these demands.
- Monetization Strategy:
- Paid OpenRouter: Confidential compute enables charging for private inference on OpenRouter (planned for the following week).
- Exclusive Models: Training proprietary models (starting with 32B, then ~150-200B, then ~800B autoregressive models, later JEPA) entirely within Targon's confidential environment creates exclusive offerings accessible only via Targon, capturing significant value. Rob uses a Coke/Pepsi analogy: instead of just bottling others' "flavors," Targon will create and sell its own.
Security Considerations in the TVM Ecosystem
- Man-in-the-Middle (MitM): An MitM attack is possible between OpenRouter and the Targon Hub API (standard HTTPS risk), but not between Targon Hub and the miners (secured by TVM). Direct connections to Targon.com would offer stronger end-to-end assurance. OpenRouter has flags to prevent training on user data.
- Proving Hub Privacy: Jake asks how users can verify Targon Hub itself isn't saving data. Rob acknowledges this is a valid point needing a cryptographic proof solution, aligning with the "don't trust, verify" ethos.
- Delegated Stake Risk: Rob expresses concern that TVM's success could make Targon so valuable that large entities might buy up stake ($ALPHA) to control subnet bandwidth, potentially centralizing access. He views the original delegated stake model as a potential vulnerability in this high-value scenario.
JEPA's Long-Term Vision: Towards Artificial Super Intelligence
- Beyond Inference: Rob envisions using a future JEPA-based ASI for complex real-world tasks, like autonomously setting up manufacturing processes (e.g., steel plants, electronics production) in response to macro trends (like US reshoring incentives). "If I have a super intelligent AI I can just tell it manufacture me everything and it will. That is what we're doing on Targon."
- Timeline: Rob commits to progress reports each quarter, aiming to catch up to frontier labs this quarter (Q2) and surpass them with JEPA in Q3.
TVM's Broader Potential Across the Bittensor Ecosystem
- Validator Security: Jake suggests TVM could secure validators on other subnets, mitigating key leakage risks – Rob agrees this is possible low-hanging fruit.
Technical Q&A: Accessing and Utilizing TVM Compute
- Access: An SDK for interacting with TVM compute will be released soon (1-2 weeks), allowing users to compile their workloads into a CVM (Confidential Virtual Machine) image. Inference is accessible now via targon.com.
- CVM Explained: Josh clarifies a CVM is a full, encrypted OS-level virtual machine running on bare metal, distinct from lighter-weight Docker containers. Users package their application (like a Dockerfile process) into a CVM.
- SSH Access: Users can technically SSH into a CVM they deploy if they pre-bake their SSH keys into the CVM image. Validators can assign specific CVMs (potentially with user keys) to miners. Miners themselves cannot access the CVM ("firewalled off").
- Validation Process: Validators request an attestation report from the CVM. This report is cryptographically signed using keys fused into the CPU/GPU hardware, providing verifiable proof of the hardware specs (GPU type, CPU, RAM, etc.) and that the CVM is running securely, without needing SSH access or interrupting user sessions.
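The attestation flow described above can be modeled in a few lines. This is a toy: real attestation reports are signed with keys fused into the CPU/GPU and verified against vendor certificate chains, so the shared HMAC secret below only stands in for that hardware root of trust, and the claim fields are illustrative.

```python
# Toy attestation check: a CVM emits a signed report of its hardware claims,
# and a lightweight validator verifies it without SSH access.
import hashlib
import hmac
import json

HARDWARE_KEY = b"fused-device-secret"  # stand-in for the hardware root of trust

def sign_report(claims: dict) -> dict:
    # the CVM side: sign the canonicalized claims
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(HARDWARE_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": tag}

def validator_verify(report: dict, expected_gpu: str) -> bool:
    # the validator side: recompute and compare in constant time,
    # then check the attested hardware matches expectations
    payload = json.dumps(report["claims"], sort_keys=True).encode()
    tag = hmac.new(HARDWARE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, report["signature"]):
        return False  # report was tampered with
    return report["claims"]["gpu"] == expected_gpu

report = sign_report({"gpu": "H200", "cpu": "EPYC", "ram_gb": 1024, "cvm_ok": True})
assert validator_verify(report, expected_gpu="H200")

# A miner who edits the claims without the fused key breaks the signature.
forged = dict(report, claims={**report["claims"], "gpu": "H100"})
assert not validator_verify(forged, expected_gpu="H100")
```
Because verifying the signature and claims is cheap, this is the sense in which validation can run "on a Raspberry Pi" while the attested workload runs on data-center GPUs.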
The State-of-the-Art Nature of TVM and Industry Collaboration
- Nvidia Collaboration: Manifold worked directly with top-level Nvidia engineers to develop TVM.
- Cutting-Edge: Rob states TVM is state-of-the-art technology, likely one of the first large-scale deployments of this type of confidential compute. They are actively working with data centers (like Google, Digital Ocean) to enable the necessary hardware features, as demand for this capability is new. "This is why TVM is so big. This is why we said think bigger," Rob exclaims.
Reflecting on the Verification vs. Innovation Trade-off
- Reduced Verification Complexity: Josh notes that verifying complex software-level optimizations (like raw inference outputs in V5) was becoming intractable and consuming development time better spent on user-requested features like security. TVM simplifies verification by focusing on hardware attestation and secure execution environments.
- Strategic Pivot: TVM allows Manifold to move away from chasing speed optimizations ("TPS for the sake of TPS") towards providing the security, privacy, and reliability demanded by users and necessary for enterprise adoption and advanced AI development.
Conclusion
Manifold's TVM launch marks a significant strategic shift, prioritizing verifiable, confidential compute to unlock enterprise adoption and enable ambitious AI training, including the JEPA architecture. Investors and researchers should monitor TVM adoption, the development of exclusive Targon-trained models, and the potential for this technology to reshape compute paradigms across Bittensor.