Hash Rate pod - Bittensor $TAO & Subnets
October 8, 2025

Hash Rate - Ep 136 - Hone ($TAO Subnet 5) Chasing AGI

Robert Meyers from Hone (Bittensor Subnet 5) joins the podcast to unpack their audacious goal: chasing Artificial General Intelligence (AGI). Hone isn’t building another large language model; it’s a high-stakes R&D project to crack the code of true machine generalization.

1. Cracking the AGI Code

  • “If we can produce a model that can generalize in this way, then that's going to be a sort of step function in AI and that's going to be a huge advantage for us.”
  • Hone is focused on solving the Abstraction and Reasoning Corpus (ARC-AGI) benchmark, a test designed to be simple for humans but nearly impossible for current AI. The goal is to move beyond pattern matching to achieve true, intuitive reasoning.
  • Today’s most advanced models, including ChatGPT, score a dismal 5% on ARC-AGI. They are brittle and fail when presented with novel problems that fall outside their massive training data (what the host calls "sentience exhaust").
  • Solving this benchmark would be a monumental achievement, not just for the prize money ($750k), but for the bragging rights of creating the world's first AI capable of genuine generalization.

2. A New Architecture for Thinking

  • “LLMs work by predicting the next word... It doesn't really understand anything, which is why if you leave the training data in any way, all of a sudden you're through the looking glass.”
  • Hone is moving away from the autoregressive transformer approach that powers today’s LLMs. Instead, it’s exploring models inspired by Yann LeCun’s Joint Embedding Predictive Architecture (JEPA).
  • Unlike LLMs that predict the next word, JEPA-style models are trained to fill in missing parts of a data sequence. This multimodal approach, fusing inputs like images and text, helps the model build a more complete, contextual understanding of the world, much as an animal does.
  • The objective is to create an AI that can make intuitive leaps (e.g., knowing how to slice a pear after seeing an apple sliced once) without needing a billion examples, a fundamental limitation of current systems.

3. The Bittensor Advantage

  • “The very powerful thing about Bittensor is that through an incentive mechanism, we can have people from all across the world work on this problem and be rewarded for incrementally improving the performance.”
  • Bittensor provides the perfect arena for this moonshot. Its incentive structure turns the AGI problem into a global, permissionless competition, rewarding any miner who can incrementally improve performance.
  • Hone deliberately allows miners to submit any architecture. This open approach avoids top-down dogma and lets the network's collective intelligence discover the best solution through ferocious, 24/7 competition.

Key Takeaways:

  • Hone is a pure R&D bet on architectural innovation over data brute force. It leverages Bittensor’s uniquely competitive environment to tackle a problem that has stumped even the world’s biggest AI labs. Success wouldn't just be an iteration; it would be a paradigm shift.
  • An AGI Moonshot, Not an LLM Factory: Hone’s singular focus is solving the ARC-AGI benchmark to achieve true generalization. This is a high-risk, high-reward play for a step-function leap in AI, not just another incremental improvement.
  • Architecture Over Data: The strategy is to out-innovate, not out-collect. By exploring novel architectures like JEPA, Hone aims to create models that think more efficiently and don't depend on ever-expanding datasets, sidestepping the data moat of centralized giants.
  • The Business Model is the Breakthrough: There is no immediate revenue. The investment thesis is straightforward: solve AGI, earn the ultimate bragging rights, and then monetize the world’s first truly intelligent model through distribution partners like Targon.

For further insights and detailed discussions, watch the full podcast: Link

This episode reveals the high-stakes pursuit of Artificial General Intelligence (AGI) on Bittensor, detailing how Hone (Subnet 5) is creating a decentralized moonshot to crack the code of true machine generalization.

Introduction to Hone and the AGI Problem

Robert Meyers introduces Hone as Subnet 5 on Bittensor, a project singularly focused on solving the ARC-AGI benchmark. The benchmark, created by AI researcher François Chollet and administered by the ARC Prize Foundation, carries a $750,000 prize and is designed to test for genuine artificial general intelligence. Robert explains that Hone's mission is to create a framework that allows novel mechanisms to tackle this notoriously difficult problem, moving beyond the limitations of current AI models.

The Challenge of True Generalization

The conversation clarifies the core difference between current AI and true AGI. While Large Language Models (LLMs) like ChatGPT are powerful at identifying new connections within their vast training data, they struggle with true generalization. They cannot make the kind of intuitive leaps humans do, such as applying the knowledge of slicing an apple to slicing a pear without prior examples. The ARC-AGI test is specifically designed to measure this capability, presenting problems that are easy for humans but extremely difficult for current AI, with top models scoring only around 5%.
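
For orientation, the public ARC-AGI corpus stores each task as a JSON object of small colored grids: a few "train" input/output pairs from which the underlying rule must be inferred, and "test" pairs scored by exact match. The toy task and mirror rule below are invented for illustration; real tasks are far harder.

```python
# A minimal sketch of the public ARC-AGI task shape. Grids are lists of
# lists of integers 0-9, each integer representing a color. This toy task
# (invented here) follows a simple rule: flip each row left-to-right.
toy_task = {
    "train": [
        {"input": [[1, 0], [0, 2]], "output": [[0, 1], [2, 0]]},
        {"input": [[3, 3, 0]], "output": [[0, 3, 3]]},
    ],
    "test": [
        {"input": [[0, 5], [6, 0]], "output": [[5, 0], [0, 6]]},
    ],
}

def mirror(grid):
    """Candidate rule inferred from the train pairs: mirror each row."""
    return [list(reversed(row)) for row in grid]

def solves(rule, task):
    """ARC-style scoring: a solver must match the test outputs exactly."""
    return all(rule(pair["input"]) == pair["output"] for pair in task["test"])

print(solves(mirror, toy_task))  # True for this invented toy task
```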

  • Actionable Insight: Investors should recognize that AGI is not an incremental improvement on LLMs but a fundamentally different paradigm. Projects like Hone represent high-risk, high-reward investments in a new architectural direction for AI.

Limitations of Current Transformer Models

Robert explains that the dominant AI architecture, the Transformer (the foundation for models like ChatGPT), is inherently limited. He uses the analogy of a Gaussian distribution, or a normal curve, to describe their knowledge base. These models perform well on data within this curve but fail when presented with novel information or edge cases that fall outside of it. This limitation is why data collection is an endless race for companies like OpenAI, as each new piece of information expands their model's performance boundary.

  • Robert Meyers notes, "A human doesn't need to see a billion examples of how to drive a car or how to slice an apple, right? We can see it once or twice... that little leap is like really hard for AI to make right now."

JEPA: A New Architecture for Understanding

Robert introduces an alternative approach inspired by Yann LeCun, Meta's chief AI scientist: the Joint Embedding Predictive Architecture (JEPA). Unlike autoregressive LLMs that predict the next word in a sequence, a JEPA model learns by predicting representations of missing information across multiple data types simultaneously (e.g., image and text). This "fill-in-the-blank" method, applied in embedding space across different modalities, helps the model build a more robust, contextual understanding of the world, similar to how a human fuses different senses to perceive their environment.
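
To make that concrete, here is a minimal, hypothetical sketch of a JEPA-style training step in PyTorch. It is illustrative only, not Hone's code: the tiny encoders, the predictor, and the random "context/target" data are placeholders, and real implementations (such as Meta's I-JEPA) typically use a momentum copy of the context encoder as the target encoder.

```python
import torch
import torch.nn as nn

dim = 64
context_encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
target_encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
predictor = nn.Linear(dim, dim)

x = torch.randn(32, 2, dim)   # batch of (visible, masked) chunk pairs
ctx, tgt = x[:, 0], x[:, 1]   # e.g., a visible patch and a masked-out patch

z_ctx = context_encoder(ctx)  # embed what the model can see
with torch.no_grad():         # target branch supplies a fixed regression target
    z_tgt = target_encoder(tgt)

# The loss lives in embedding space: predict the *representation* of the
# missing region, not its raw pixels or tokens.
loss = nn.functional.mse_loss(predictor(z_ctx), z_tgt)
loss.backward()
print(float(loss))
```

The key design choice is that prediction happens in latent space, so the model never has to reconstruct every pixel or token; it only has to capture the abstract content of what is missing.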

  • Strategic Implication: The development of non-transformer architectures like JEPA is a critical trend. Researchers and investors should monitor these alternative approaches, as they could unlock capabilities like generalization that current models lack.

Hone's Decentralized Strategy on Bittensor

Robert emphasizes that Bittensor's core strength is its ability to harness global, decentralized talent through incentives. Instead of imposing his own architectural theories on the network, he designed Hone as an open competition: the subnet allows anyone to submit a model of any architecture to solve the ARC-AGI problem. This permissionless structure allows for a Cambrian explosion of experimentation, where the network itself discovers the best solution.

  • Key Insight: Bittensor's incentive mechanism transforms a centralized research problem into a global, continuous competition, potentially accelerating breakthroughs far faster than a single, closed lab could.

Creating a Framework for Rapid Iteration

To facilitate this competition, Hone has developed its own private holdout dataset. This allows the subnet to rapidly test and iterate on miner-submitted models without overwhelming the ARC Prize Foundation's official submission process. The system is designed to reward any incremental improvement, creating constant upward pressure on performance. The goal is to identify a top-performing model through this internal process and then submit it to the official ARC-AGI competition.
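
As a rough illustration of that loop, here is a hypothetical Python sketch of a beat-the-best holdout evaluation. The function names, the toy data, and the stylized reward rule are assumptions for illustration, not Hone's actual validator logic or Bittensor's emissions math.

```python
from typing import Callable, Dict, List, Tuple

Grid = List[List[int]]
Model = Callable[[Grid], Grid]

def score(model: Model, holdout: List[Tuple[Grid, Grid]]) -> float:
    """Fraction of private holdout tasks solved exactly (ARC-style metric)."""
    solved = sum(model(x) == y for x, y in holdout)
    return solved / len(holdout)

def evaluate_round(submissions: Dict[str, Model],
                   holdout: List[Tuple[Grid, Grid]],
                   best_so_far: float):
    """Score this round's submissions; pay only for incremental improvement."""
    rewards = {}
    for miner, model in submissions.items():
        s = score(model, holdout)
        if s > best_so_far:                   # upward pressure: beat-the-best pays
            rewards[miner] = s - best_so_far  # stylized reward, not real emissions
            best_so_far = s
    return rewards, best_so_far

holdout = [([[1]], [[1]])]            # placeholder one-task holdout set
subs = {"miner_a": lambda g: g}       # identity "model" for illustration
print(evaluate_round(subs, holdout, best_so_far=0.0))
```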

  • Actionable Insight: Hone's strategy of creating a private, iterative testing environment is a clever use of the Bittensor framework. This model could be applied to other complex AI benchmark problems, creating new opportunities for decentralized research.

The Business Model: A High-Stakes R&D Play

When asked about the business model, Robert is transparent that Hone is currently in a pure research and development phase. The immediate financial goal is to win the $750,000 ARC-AGI prize, which would be used to buy back the subnet's native token. However, the true value lies in the breakthrough itself. Successfully creating a model that can generalize would represent a monumental step-function leap in AI, and the underlying techniques would be incredibly valuable and readily monetizable through partners like Targon.

Confronting the Data Moat Problem

Robert addresses a critical challenge for all decentralized AI projects: the "data moat" of centralized giants. Using the analogy of Tesla's Full Self-Driving (FSD) versus George Hotz's open-source OpenPilot, he explains that companies with massive user bases collect more edge-case data, creating a feedback loop that continually improves their models. This makes it difficult for open-source alternatives to compete on performance.

  • Robert states, "The data rich get data richer... I worry about the pre-training solutions... running into this issue."

Hone's Strategic Solution: Bypassing the Data Problem

Hone's ultimate strategy is not to compete on data volume but to change the game entirely. By focusing on creating a model that can generalize from a small amount of data, it sidesteps the need for a massive, proprietary dataset. If successful, a generalizing model would inherently understand edge cases without needing to have seen them before, neutralizing the primary advantage held by centralized AI companies.

  • Strategic Implication: Generalization is the key to breaking the data monopolies of Big Tech. Investors should look for projects that are not just trying to build a decentralized version of existing models but are developing fundamentally new architectures that require less data.

The Power of Key Backers: Manifold and Latent

Robert highlights the strategic importance of Hone's backers. Manifold Labs (the team behind the DePIN platform Targon) provides a clear path to distribution and monetization for any successful technology developed on the subnet. Latent Holdings, led by JJ, offers deep expertise in open-source governance and development, providing the structural knowledge needed to manage a complex, decentralized research project.

Bittensor as a Pure Meritocracy

The conversation concludes by reflecting on the unique power of Bittensor's ecosystem. It is described as a "pure meritocracy" where code and performance are the only metrics that matter. This environment attracts exceptional, highly motivated talent from around the world who might otherwise be at large AI labs. This relentless, 24/7, permissionless contribution engine is what gives projects like Hone a fighting chance at solving problems that have stumped the world's largest tech companies.

Conclusion

This episode highlights Hone's ambitious strategy to achieve AGI by incentivizing novel architectural research on Bittensor, aiming to bypass the data-intensive limitations of current AI. For investors and researchers, Hone represents a high-risk but potentially transformative bet on the power of decentralized collaboration to solve AI's grandest challenges.
