This episode unveils Manifold's Targon Virtual Machine (TVM), a landmark upgrade that brings verifiable confidential compute to BitTensor's Subnet 4, positioning the subnet to rival frontier AI labs and reshape Crypto AI infrastructure through stronger security guarantees and novel AI architectures like JEPA.
Manifold Q1 Updates & Brand Refresh
- Manifold kicks off by highlighting a productive quarter, marked by a significant brand redesign aimed at unifying their product suite and enhancing user experience.
- Rob, representing Manifold, emphasizes the positive community reception to the new visual identity, setting the stage for major technical announcements.
Sybil Search Engine Enhancements
- Key improvements include:
- Model Selection: Users can now choose the underlying AI model used for inference directly within the interface, a capability introduced alongside the brand update.
- Performance Boost: Significant work has been done to reduce search load times, making the engine much quicker.
- Future Development: Rob notes that Sybil will see continued development, particularly integrating capabilities stemming from the new Targon Virtual Machine (TVM), which will be detailed later.
Tao.xyz: The Bloomberg Terminal for BitTensor
- Manifold introduces Tao.xyz, designed as a comprehensive monitoring tool for the BitTensor blockchain, aiming to replace disparate scripts and monitoring methods.
- Centralized Monitoring: The goal is to provide a single dashboard for tracking key network activities: pending transactions (extrinsics), the mempool (the pool of pending transactions), wallet tagging, and subnet-specific data (like the output of `btcli stake list`). Rob mentions his personal use case: checking network status easily upon waking up. A minimal example of this kind of query follows this list.
- User Engagement: Early metrics show high engagement, with a low 1% bounce rate and an average session duration around 10 minutes, indicating strong user adoption and utility.
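To make the monitoring use case concrete, here is a minimal sketch (not how Tao.xyz itself is implemented) of the kind of query such a dashboard centralizes: asking a BitTensor (Substrate-based) node for its pending extrinsics over JSON-RPC. The endpoint URL is an assumption; substitute whichever RPC node you use.

```python
# Minimal sketch: poll a BitTensor node's transaction pool (mempool) over JSON-RPC.
# The endpoint below is assumed -- point this at whatever RPC node you trust.
import asyncio
import json
import websockets  # pip install websockets

RPC_URL = "wss://entrypoint-finney.opentensor.ai:443"  # assumed public endpoint

async def pending_extrinsics() -> list:
    async with websockets.connect(RPC_URL) as ws:
        # author_pendingExtrinsics is a standard Substrate RPC method.
        await ws.send(json.dumps({
            "jsonrpc": "2.0", "id": 1,
            "method": "author_pendingExtrinsics", "params": [],
        }))
        reply = json.loads(await ws.recv())
        return reply.get("result", [])

if __name__ == "__main__":
    txs = asyncio.run(pending_extrinsics())
    print(f"{len(txs)} pending extrinsics in the mempool")
```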
Tao.xyz Wallet Launch
- Addressing user dissatisfaction with existing options, Manifold launched the Tao.xyz wallet.
- Native Integration: Built around native BitTensor functionality on top of Polkadot.js, so it works wherever Polkadot.js-compatible wallets are accepted. Polkadot.js is a collection of tools, interfaces, and libraries for interacting with Polkadot-ecosystem blockchains like BitTensor.
- Cross-Browser Support: Notably, the wallet extension works on both Chrome (and Chromium-based browsers like Arc) and Firefox, broadening accessibility.
Targon.com Redesign
- Manifold acknowledges their initial focus was more technical than aesthetic, leading to a redesign of Targon.com, their AI inference platform on BitTensor.
- Improved UI/UX: The redesign, launched just before the recording, offers a more polished and user-friendly interface, aligning with the overall brand refresh and focus on usability. Rob expresses pride in the team's progress on delivering visually appealing and functional web applications.
The Major Announcement: Targon Virtual Machine (TVM)
- The centrepiece of the update is the Targon Virtual Machine (TVM), presented as a secure, confidential computing platform specifically designed for AI workloads on BitTensor's Subnet 4.
- Confidential Computing: TVM utilizes both TEEs (Trusted Execution Environments) – secure, isolated areas within a CPU – and Nvidia Confidential Compute (securing data processed on the GPU) to ensure end-to-end privacy and integrity for AI tasks.
- Enhanced Security: Rob highlights a critical security aspect: using TEEs alone without Nvidia Confidential Compute leaves a vulnerability. "You can do a man-in-the-middle attack in between the CPU and GPU processes to be able to intercept and receive the information," Rob explains, stressing the necessity of securing the entire compute pathway.
- Simplified Deployment: Despite the complexity, the installation process for miners setting up TVM-compatible nodes has been streamlined.
- New Capabilities: TVM enables fully confidential AI workflows, including training, fine-tuning, and inference, entirely within the Targon environment. This allows for secure processing of sensitive data and the creation of proprietary models accessible only via Targon.
- Monetization Potential: This enhanced security unlocks possibilities like offering paid, private inference tiers on platforms like OpenRouter, where previously miners could potentially read user prompts and outputs.
Addressing Centralization Concerns & Vision for Decentralized Validation
- Rob directly addresses community concerns that TVM might lead to centralization on Subnet 4.
- Counter-Argument: He argues that TVM is no more centralized than other subnets with "opinionated design decisions" (like fixed UID slots). He emphasizes that these decisions are necessary starting points.
- Path to Decentralization: The long-term vision enabled by TVM includes radically decentralized validation. "With the TVM we can now do validation on a Raspberry Pi," Rob states, envisioning a future where individuals worldwide can participate in securing the network with low-cost hardware, separating validation from the high-end compute required for AI tasks.
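The Raspberry Pi claim follows from what a TVM validator actually does: it verifies signed attestation evidence from the miner's secure hardware rather than re-running the workload. The sketch below is a hypothetical illustration of that shape only; real verification walks Intel/AMD and Nvidia certificate chains via vendor toolchains, which are reduced here to a single signature check.

```python
# Hypothetical sketch of lightweight TVM-style validation: check attestation
# evidence, never re-run the AI workload. Report layout and key handling are
# invented for illustration; real deployments use vendor certificate chains.
from dataclasses import dataclass
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

@dataclass
class AttestationReport:           # hypothetical layout
    cpu_quote: bytes               # evidence from the CPU TEE
    gpu_evidence: bytes            # evidence from Nvidia Confidential Compute
    signature: bytes               # hardware signature over both pieces of evidence
    signer_public_key: bytes       # would be established via a vendor cert chain

def verify(report: AttestationReport) -> bool:
    """Accept a miner only if both CPU and GPU evidence are present and signed."""
    if not report.cpu_quote or not report.gpu_evidence:
        # TEE-only setups are rejected: the CPU-to-GPU path must be attested too.
        return False
    try:
        key = Ed25519PublicKey.from_public_bytes(report.signer_public_key)
        key.verify(report.signature, report.cpu_quote + report.gpu_evidence)
        return True
    except InvalidSignature:
        return False
```

Because the check amounts to a handful of signature verifications rather than GPU work, it runs comfortably on low-cost hardware.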
Competing with Frontier AI Labs & The Role of Storage
- Manifold positions TVM as a strategic move to compete directly with leading AI labs like OpenAI, xAI, and Anthropic by aggregating enterprise-grade GPU resources.
- Challenge of Scale: Rob acknowledges the massive GPU clusters (50k-100k+) used by frontier labs and the difficulty of competing without architectural advantages or significant scale.
- TVM as an Enabler: TVM aims to provide the infrastructure to pool high-performance compute (like H100s, H200s) necessary for large-scale model training within the BitTensor ecosystem.
- Data Distribution Challenge: A key hurdle identified in discussions with data centers is distributing massive (multi-terabyte) datasets required for training across geographically dispersed nodes. While technologies like RDMA (Remote Direct Memory Access - allowing direct memory access between networked computers) help within a data center, cross-data center training requires distributed storage solutions.
- Future Focus: Storage Solutions: Manifold plans to develop capabilities for hosting models and datasets directly on Targon, reducing reliance on third parties and supporting large-scale, distributed training efforts. This will be a focus in Q2.
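To see the distribution problem in its simplest form, the toy sketch below carves a large dataset into fixed-size shards and assigns them round-robin to dispersed nodes. Shard size, node names, and the placement policy are illustrative assumptions; real setups add replication, locality awareness, and RDMA/InfiniBand transport on top.

```python
# Toy sketch of the cross-data-center distribution problem: carve a multi-terabyte
# dataset into fixed-size shards and assign each shard to a training node.
from typing import Dict, List

def plan_shards(dataset_bytes: int, shard_bytes: int, nodes: List[str]) -> Dict[str, List[int]]:
    """Round-robin shard placement; returns {node: [shard indices]}."""
    n_shards = -(-dataset_bytes // shard_bytes)  # ceiling division
    plan: Dict[str, List[int]] = {node: [] for node in nodes}
    for shard in range(n_shards):
        plan[nodes[shard % len(nodes)]].append(shard)
    return plan

if __name__ == "__main__":
    # Example: a 10 TB dataset, 1 GB shards, three data centers in different regions.
    plan = plan_shards(10 * 1024**4, 1024**3, ["us-east", "eu-west", "ap-south"])
    for node, shards in plan.items():
        print(f"{node}: {len(shards)} shards (~{len(shards)} GB)")
```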
Targon V5 to V6 Incentive Transition Plan
- Manifold outlines a gradual transition for miner incentives from the current Targon V5 to the TVM-based V6.
- Bootstrapping Phase: Initially, a 70/30 incentive split will favour miners running the new confidential compute nodes to encourage rapid adoption, acknowledging the difficulty of setting up these systems.
- Gradual Annealing: This boosted incentive will gradually decrease over two weeks (approx. 2.14% per day) to reach a new equilibrium, ensuring service continuity on platforms like OpenRouter while incentivizing the crucial shift to TVM. Rob reassures miners: "We're not trying to throw you all into the deep end."
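The quoted rate checks out as a linear anneal: roughly 30 percentage points spread over 14 days gives about 2.14 points per day. The call did not spell out the exact start and end splits beyond the initial 70/30, so the sketch below simply parameterizes a linear schedule with assumed endpoints.

```python
# Back-of-the-envelope check on the annealing rate mentioned on the call:
# ~30 percentage points over two weeks is ~2.14 points per day.
def linear_anneal(start_pct: float, end_pct: float, days: int) -> list[float]:
    """Daily incentive share for the boosted (TVM/V6) side of the split."""
    step = (start_pct - end_pct) / days
    return [round(start_pct - step * d, 2) for d in range(days + 1)]

print(round(30 / 14, 2))              # 2.14 -- the per-day rate quoted on the call
print(linear_anneal(70.0, 40.0, 14))  # one assumed reading: 70% easing to 40% over 14 days
```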
Future Vision: Leveraging TVM with JEPA
- Rob pivots to a forward-looking vision, introducing JEPA (Joint Embedding Predictive Architecture) as a potential step-function improvement in AI, enabled by TVM.
- Critique of Auto-Regressive Models: He argues that current models like ChatGPT, while fluent, are fundamentally limited. They predict sequentially, leading to compounded errors and hallucinations, essentially acting as sophisticated compression algorithms of their training data rather than truly understanding or reasoning. "They don't actually know what they don't know," Rob asserts.
- Introducing JEPA: Proposed by Yann LeCun, JEPA aims to build internal world models by learning the dependencies between inputs (X) and outputs (Y) in abstract representation space: instead of predicting the next token, it outputs an abstract representation (a tensor) of Y that encodes this understanding (a minimal sketch follows this section).
- Potential for True Intelligence: This approach, Rob suggests, mirrors how humans and animals learn and navigate the world, enabling prediction and planning. He uses the ARC-AGI challenge (a benchmark for abstract reasoning) as an example where current models fail but humans easily discern underlying rules (like counting holes).
- Agentic Systems & Cost Functions: JEPA can be used in agentic frameworks that incorporate not just reward maximization but also cost minimization, crucial for robust real-world operation (like a robot avoiding actions that could damage it). This involves predicting future states and associated costs at different time scales.
- The Targon/BitTensor Advantage: TVM provides the secure, scalable, confidential compute environment needed to train and deploy potentially massive JEPA models exclusively within the BitTensor ecosystem, specifically on Subnet 4 (Targon). This creates a unique value proposition and a potential moat.
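For readers new to the architecture, below is a minimal PyTorch sketch of the joint-embedding-predictive idea Rob describes: encode X and Y separately, predict Y's embedding from X's, and compute the loss in that abstract space, with an exponential-moving-average target encoder to discourage collapse (as in Meta's I-JEPA). It is a toy illustration of the concept, not anything Manifold has published.

```python
# Toy JEPA-style training step: the prediction target is an embedding, not tokens.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, dim_in: int, dim_emb: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 256), nn.GELU(), nn.Linear(256, dim_emb))

    def forward(self, x):
        return self.net(x)

context_enc = Encoder(dim_in=128, dim_emb=64)   # encodes the observed context X
target_enc = copy.deepcopy(context_enc)         # EMA copy, encodes the target Y
predictor = nn.Sequential(nn.Linear(64, 128), nn.GELU(), nn.Linear(128, 64))
opt = torch.optim.AdamW(list(context_enc.parameters()) + list(predictor.parameters()), lr=3e-4)

def train_step(x: torch.Tensor, y: torch.Tensor, ema: float = 0.996) -> float:
    pred = predictor(context_enc(x))            # predicted embedding of Y
    with torch.no_grad():
        target = target_enc(y)                  # actual embedding of Y (no gradient)
    loss = F.smooth_l1_loss(pred, target)       # error lives in abstract representation space
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                       # slow EMA update of the target encoder
        for p_t, p_c in zip(target_enc.parameters(), context_enc.parameters()):
            p_t.mul_(ema).add_(p_c, alpha=1 - ema)
    return loss.item()

# X and Y stand in for two related views/segments of the same sample.
x, y = torch.randn(32, 128), torch.randn(32, 128)
print(train_step(x, y))
```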
Product Demos & Walkthroughs
- Rob provides brief live demos of the newly updated Manifold products.
- Tao.xyz Demo:
- Showcases the "Bloomberg terminal feel" with minimal scrolling.
- Demonstrates navigating subnets (e.g., Subnet 18 - Cortex), viewing delegation tables, identifying MEV bots (Miner Extractable Value - profit miners can make by reordering or inserting transactions), such as "sandwich" bots, and tracking large transactions (over $500 worth of Alpha).
- Highlights real-time updates and the ability to track specific wallets (though not shown logged in to avoid doxxing).
- Mentions upcoming features like better display of subnet parameters (Alpha in/out) and performance improvements (reducing load times from 60ms to 6ms).
- Targon.com Demo:
- Displays the new sleek design, emphasizing the unified UI/UX across Manifold products.
- Shows the playground for interacting with models, browsing available models, and getting code snippets (a generic example of such a snippet follows this list).
- Reiterates that core functionality remains similar but the user experience is significantly improved.
- Sybil.com Demo:
- Shows the redesigned interface allowing model selection (R1, V3).
- Demonstrates the faster search speed with an example query ("how tall a cow is").
- Tao Wallet Availability:
- Confirms the wallet is available on the Chrome Web Store and the Firefox Add-ons store. Links were shared in the live chat.
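The code snippets in the Targon playground follow the familiar OpenAI-compatible pattern; the sketch below shows what such a call generally looks like in Python. The base URL, model id, and key are placeholders, not confirmed values; use whatever the playground generates for you.

```python
# Generic OpenAI-compatible chat call of the kind the Targon playground hands out.
# The base URL, model id, and API key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.targon.example/v1",  # placeholder -- use the URL shown in the playground
    api_key="YOUR_TARGON_API_KEY",             # placeholder
)

resp = client.chat.completions.create(
    model="deepseek-r1",                       # placeholder model id
    messages=[{"role": "user", "content": "How tall is a cow?"}],
)
print(resp.choices[0].message.content)
```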
Q&A and Deeper Dive into TVM Implications
- The discussion shifts to a Q&A format, delving deeper into TVM's mechanics and implications.
- TVM Validation Mechanism: Confirms that validators can run on low-power hardware like a Raspberry Pi because they only need to check cryptographic proofs (attestation reports) generated by the miners' secure hardware, not perform heavy computation themselves.
- Miner Role & Hardware Optimization:
- With TVM, miner competition shifts from pure software/speed optimization (like in V5) to hardware optimization.
- Josh Brown (Manifold CTO) explains that TVM verifies the presence and capability of confidential compute hardware (specific CPU/GPU requirements) but allows miners to innovate on surrounding infrastructure (storage speed, network latency like InfiniBand, cluster configurations).
- "This almost becomes hardware optimization instead of software optimization," Josh states, adding, "We're incentivizing hardware level optimizations." Validators can measure and reward superior hardware setups (e.g., fast storage, low-latency interconnects across multiple H200 nodes).
- Enterprise Adoption & Security Guarantees:
- TVM's verifiable confidentiality is positioned as a major advantage over trusting centralized providers like OpenAI. "Don't trust, verify," becomes the mantra.
- Josh emphasizes that customers prioritize privacy and consistency over raw speed. TVM allows Manifold to prove data privacy and integrity, addressing a key enterprise barrier.
- Performance Overhead of Confidential Compute: Acknowledges there might be some performance overhead compared to non-confidential compute, but argues the trade-off for security, privacy, and consistency is highly valued by users. Speed can still be optimized via hardware (e.g., incentivizing H200s over H100s). The shift also solves issues with miners gaming the V5 system by spamming requests.
- Future Monetization Strategies:
- Immediate: Turn on paid, private inference tiers on OpenRouter next week, leveraging TVM's security guarantees.
- Mid-Term: Train proprietary, high-performance models (starting with 32B, scaling up to ~800B) accessible only via Targon, creating exclusivity and capturing value within the ecosystem. Rob uses the analogy: "We want to make our own flavor of models... and keep that within inside of Targon."
- Long-Term (JEPA): If successful with JEPA, the applications could be vast and highly valuable, potentially creating superintelligence deployable for complex tasks.
- Man-in-the-Middle Attack Considerations:
- Acknowledges a theoretical MITM possibility between OpenRouter and the Targon Hub API (though HTTPS mitigates much of this).
- Direct interaction with Targon.com offers stronger end-to-end guarantees.
- The question of cryptographically proving Targon Hub itself isn't saving data is raised as an area for future consideration, aligning with the "don't trust, verify" principle.
- TVM SDK and Compute Access:
- An SDK for developers to utilize TVM compute is planned for release shortly (1-2 weeks), once the CVM (Confidential Virtual Machine) compilation and distribution process for miners is finalized (Targeted for v6.1).
- Users won't rent bare metal directly but will submit workloads (akin to defining a Docker image) that get packaged into a CVM and run securely on miner hardware; a hypothetical example of such a spec follows this list. Payment will likely be via targon.com.
- SSH Access and CVM Customization:
- Technically, a user could create a CVM configured for SSH access, embedding their keys. This CVM would run as a full, isolated virtual machine on the miner's hardware.
- Validators can still verify the CVM is running correctly via attestation reports without needing SSH access.
- Crucially, the miner providing the hardware has no access (SSH or otherwise) into the running CVM, ensuring isolation. "It actually firewalls the machine off to them," confirms Josh.
- Security Guarantees (Hardware-Level):
- Confidence in TVM security stems from hardware-level guarantees provided by CPU vendors (Intel, AMD) and Nvidia. Keys and attestation mechanisms are baked into the silicon (e.g., via specialized chips or secure enclaves).
- Manifold worked directly with Nvidia engineers, indicating the state-of-the-art nature of this implementation. Josh notes, "This is going to be probably one of the first times this is actually deployed at scale."
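To make the "akin to defining a Docker image" point concrete, here is a hypothetical workload specification of the sort an SDK might accept before packaging it into a CVM. Field names and structure are invented for illustration; the actual SDK interface was not shown on the call.

```python
# Hypothetical shape of a TVM workload submission -- all field names are invented;
# the real SDK (targeted for v6.1) may look quite different.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ConfidentialWorkload:
    image: str                                # container image, as with Docker
    command: List[str]                        # entrypoint run inside the CVM
    gpus: int = 1                             # confidential-compute-capable GPUs required
    env: Dict[str, str] = field(default_factory=dict)
    ssh_public_key: Optional[str] = None      # optional: bake a key in for SSH access

    def to_manifest(self) -> dict:
        """What would get packaged into a CVM and attested before launch."""
        return {
            "image": self.image,
            "command": self.command,
            "resources": {"gpus": self.gpus},
            "env": self.env,
            "ssh_public_key": self.ssh_public_key,
        }

workload = ConfidentialWorkload(
    image="pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime",  # example image
    command=["python", "finetune.py", "--model", "my-32b-base"],
    gpus=8,
)
print(workload.to_manifest())
```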
JEPA Revisited & Future Ambitions
- Rob reiterates his conviction in JEPA and the transformative potential unlocked by TVM.
- Training Proprietary Models: The immediate goal is training unique auto-regressive models exclusive to Targon, building a competitive moat.
- Long-Term Vision: AGI and Real-World Applications: Rob paints a picture of using a future superintelligent JEPA model (running on Targon's global, confidential compute network) to solve complex real-world problems, like automating manufacturing ("manufacture me everything").
- Conviction: "I'm telling you right now that this is what we're going to accomplish... And now with TVM I finally have the compute to be able to pull it off."
Concerns: Delegated Stake and Subnet Security
- Rob raises a new concern stemming from TVM's potential success: the risk associated with BitTensor's delegated stake model.
- Value Creation: TVM could make Subnet 4 extremely valuable, attracting large entities (corporations, potentially even nation-states) seeking access to its unique confidential compute capabilities.
- Takeover Risk: These entities might try to acquire large amounts of TAO (BitTensor's native token) to gain majority stake/control over the subnet, potentially centralizing access or directing its resources. Rob frames delegated stake as potentially "one of the greatest mistakes on BitTensor" in this context, highlighting a future governance challenge.
Concluding Remarks
- The session wraps up with enthusiasm for Manifold's releases and future direction. Jake (the host) expresses amazement at the scope and technical depth of TVM, comparing its potential impact to that of an entirely new top-100 cryptocurrency project built within BitTensor. Manifold is positioned as being at the "tip of the spear" for implementing large-scale, verifiable confidential compute for AI.
Reflective and Strategic Conclusion
- Manifold's TVM introduces verifiable confidential compute, positioning Targon to compete with AI giants and pursue novel models like JEPA. Crypto AI investors/researchers must track TVM adoption and its potential to create exclusive, high-value AI services, while also monitoring the emerging governance challenges around securing highly valuable subnets.