
Robert Meyers from Hone (Bittensor Subnet 5) joins the podcast to unpack their audacious goal: chasing Artificial General Intelligence (AGI). Hone isn’t building another large language model; it’s a high-stakes R&D project to crack the code of true machine generalization.
1. Cracking the AGI Code
2. A New Architecture for Thinking
3. The Bittensor Advantage
Key Takeaways:
For further insights and detailed discussions, watch the full podcast: Link

This episode explores the high-stakes pursuit of Artificial General Intelligence (AGI) on Bittensor, detailing how Hone (Subnet 5) has structured a decentralized moonshot aimed at achieving true machine generalization.
Introduction to Hone and the AGI Problem
Robert Meyers introduces Hone as Subnet 5 on Bittensor, a project singularly focused on solving the ARC AGI benchmark. This benchmark, created by AI researcher François Chollet and administered by the ARC Prize Foundation, carries a $750,000 prize and is designed to test for genuine artificial general intelligence. Robert explains that Hone's mission is to create a framework that allows novel mechanisms to tackle this notoriously difficult problem, moving beyond the limitations of current AI models.
The Challenge of True Generalization
The conversation clarifies the core difference between current AI and true AGI. While Large Language Models (LLMs) like ChatGPT are powerful at identifying new connections within their vast training data, they struggle with true generalization. They cannot make the kind of intuitive leaps humans do, such as applying the knowledge of slicing an apple to slicing a pear without prior examples. The ARC AGI test is specifically designed to measure this capability, presenting problems that are easy for humans but extremely difficult for current AI, with top models scoring only around 5%.
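The flavor of an ARC-style task can be sketched in a few lines. The grids and the rule below are a toy illustration (not Hone's or the ARC Prize Foundation's actual task format): a solver sees a handful of input/output grid pairs, must induce the transformation, and then apply it to a fresh input it has never seen.

```python
# Toy ARC-style task: grids are small integer matrices (colors 0-9).
# The solver must induce the rule from the train pairs alone.
train_pairs = [
    ([[1, 0], [0, 1]], [[2, 0], [0, 2]]),  # rule here: color 1 -> color 2
    ([[0, 1], [1, 1]], [[0, 2], [2, 2]]),
]
test_input = [[1, 1], [0, 1]]

def induce_color_map(pairs):
    """Infer a per-cell color substitution from the training pairs."""
    mapping = {}
    for grid_in, grid_out in pairs:
        for row_in, row_out in zip(grid_in, grid_out):
            for a, b in zip(row_in, row_out):
                mapping[a] = b
    return mapping

def apply_color_map(grid, mapping):
    """Apply the induced substitution cell by cell."""
    return [[mapping.get(c, c) for c in row] for row in grid]

rule = induce_color_map(train_pairs)
print(apply_color_map(test_input, rule))  # [[2, 2], [0, 2]]
```

A human spots this rule instantly; the difficulty of real ARC tasks is that each one demands a different, previously unseen rule, so no fixed solver like the one above survives across tasks.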
Limitations of Current Transformer Models
Robert explains that the dominant AI architecture, the Transformer (the foundation for models like ChatGPT), is inherently limited. He uses the analogy of a Gaussian distribution, or a normal curve, to describe their knowledge base. These models perform well on data within this curve but fail when presented with novel information or edge cases that fall outside of it. This limitation is why data collection is an endless race for companies like OpenAI, as each new piece of information expands their model's performance boundary.
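The normal-curve analogy can be made concrete with a toy predictor (my illustration, not any real model): a memorizing model that answers by copying the nearest training example performs well inside its training distribution and collapses outside it.

```python
# A "memorizing" predictor: strong near its training data, brittle
# beyond it -- echoing the in-distribution vs. edge-case distinction.
train_x = [x / 10 for x in range(-20, 21)]  # inputs covering [-2, 2]
train_y = [x * x for x in train_x]          # true function: x^2

def nearest_neighbor_predict(x):
    """Predict by copying the label of the closest training input."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

in_dist_err = abs(nearest_neighbor_predict(1.05) - 1.05 ** 2)   # inside [-2, 2]
out_dist_err = abs(nearest_neighbor_predict(10.0) - 10.0 ** 2)  # far outside
print(in_dist_err, out_dist_err)  # tiny error inside, huge error outside
```

Collecting more data widens the covered interval but never eliminates the boundary, which is the treadmill the episode describes.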
JEPA: A New Architecture for Understanding
Robert introduces an alternative approach championed by Meta's Chief AI Scientist, Yann LeCun: JEPA (Joint Embedding Predictive Architecture). Unlike transformers that predict the next token in a sequence, a JEPA model learns by predicting the representation of missing information in an abstract embedding space, and it can do so across multiple data types simultaneously (e.g., image and text). This "fill-in-the-blank" method, applied across modalities, helps the model build a more robust, contextual understanding of the world, similar to how a human fuses different senses to perceive their environment.
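The key shift JEPA makes is *where* the loss lives. The sketch below is a deliberately minimal simplification (not Meta's implementation; the encoder and predictor here are fixed toy functions rather than learned networks): the model predicts the embedding of the masked target from the embedding of the visible context, and error is measured in embedding space rather than on raw pixels or tokens.

```python
def encode(vec):
    """Stand-in encoder: a fixed toy embedding (real JEPA learns this)."""
    return [2.0 * v for v in vec]

def predictor(context_emb, weight):
    """Stand-in predictor mapping context embedding -> target embedding."""
    return [weight * c for c in context_emb]

def jepa_loss(context, target, weight):
    """JEPA-style objective: mean-squared error in embedding space."""
    pred = predictor(encode(context), weight)
    tgt = encode(target)
    return sum((p - t) ** 2 for p, t in zip(pred, tgt))

# With a well-fit predictor the embedding-space loss drops to zero:
context, target = [1.0, 2.0], [1.0, 2.0]
print(jepa_loss(context, target, weight=1.0))  # 0.0 when prediction matches
print(jepa_loss(context, target, weight=0.5))  # nonzero mismatch
```

Because the loss is on representations rather than raw data, the model is free to discard unpredictable surface detail and keep only the structure that carries across modalities.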
Hone's Decentralized Strategy on Bittensor
Robert emphasizes that Bittensor's core strength is its ability to harness global, decentralized talent through incentives. Instead of forcing his own architectural theories, he designed Hone as an open competition. The subnet allows anyone to submit a model of any architecture to solve the ARC AGI problem. This permissionless structure allows for a Cambrian explosion of experimentation, where the network itself discovers the best solution.
Creating a Framework for Rapid Iteration
To facilitate this competition, Hone has developed its own private holdout dataset. This allows the subnet to rapidly test and iterate on miner-submitted models without overwhelming the official ARC Foundation submission process. The system is designed to reward any incremental improvement, creating a constant upward pressure on performance. The goal is to identify a top-performing model through this internal process and then submit it to the official ARC AGI competition.
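The incentive loop described above can be sketched as follows. All names and the scoring rule are hypothetical (this is not Hone's actual validator code): each miner submission is scored against the private holdout set, and any submission that beats the running best captures a reward proportional to its increment.

```python
def score(model, holdout):
    """Fraction of holdout tasks the model solves exactly."""
    solved = sum(1 for task in holdout if model(task["input"]) == task["output"])
    return solved / len(holdout)

def run_epoch(submissions, holdout, best_score=0.0):
    """Reward each submission that improves on the running best score."""
    rewards = {}
    for miner, model in submissions.items():
        s = score(model, holdout)
        if s > best_score:
            rewards[miner] = s - best_score  # pay only for the increment
            best_score = s
    return rewards, best_score

# Toy holdout "tasks" and two miner submissions:
holdout = [{"input": x, "output": x + 1} for x in range(10)]
subs = {"miner_a": lambda x: x, "miner_b": lambda x: x + 1}
rewards, best = run_epoch(subs, holdout)
print(rewards, best)  # only the improving miner captures a reward
```

Paying for increments rather than absolute scores is what creates the constant upward pressure the episode describes: a stale model earns nothing, no matter how good it once was.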
The Business Model: A High-Stakes R&D Play
When asked about the business model, Robert is transparent that Hone is currently in a pure research and development phase. The immediate financial goal is to win the $750,000 ARC AGI prize, which would be used to buy back the subnet's native token. However, the true value lies in the breakthrough itself. Successfully creating a model that can generalize would represent a step-function improvement in AI capability, and the underlying techniques would be incredibly valuable and readily monetizable through partners like Targon.
Confronting the Data Moat Problem
Robert addresses a critical challenge for all decentralized AI projects: the "data moat" of centralized giants. Using the analogy of Tesla's Full Self-Driving (FSD) versus openpilot, the open-source driver-assistance system from George Hotz's comma.ai, he explains that companies with massive user bases collect more edge-case data, creating a feedback loop that continually improves their models. This makes it difficult for open-source alternatives to compete on performance.
Hone's Strategic Solution: Bypassing the Data Problem
Hone's ultimate strategy is not to compete on data volume but to change the game entirely. By focusing on creating a model that can generalize from a small amount of data, it sidesteps the need for a massive, proprietary dataset. If successful, a generalizing model would inherently understand edge cases without needing to have seen them before, neutralizing the primary advantage held by centralized AI companies.
The Power of Key Backers: Manifold and Latent
Robert highlights the strategic importance of Hone's backers. Manifold Labs (the team behind the DePIN platform Targon) provides a clear path to distribution and monetization for any successful technology developed on the subnet. Latent Holdings, led by JJ, offers deep expertise in open-source governance and development, providing the structural knowledge needed to manage a complex, decentralized research project.
Bittensor as a Pure Meritocracy
The conversation concludes by reflecting on the unique power of Bittensor's ecosystem. It is described as a "pure meritocracy" where code and performance are the only metrics that matter. This environment attracts exceptional, highly motivated talent from around the world who might otherwise be at large AI labs. This relentless, 24/7, permissionless contribution engine is what gives projects like Hone a fighting chance at solving problems that have stumped the world's largest tech companies.
Conclusion
This episode highlights Hone's ambitious strategy to achieve AGI by incentivizing novel architectural research on Bittensor, aiming to bypass the data-intensive limitations of current AI. For investors and researchers, Hone represents a high-risk but potentially transformative bet on the power of decentralized collaboration to solve AI's grandest challenges.