Opentensor Foundation
August 18, 2025

Subnet 56 :: Gradients :: Bittensor End-to-end AI Model Training Suite

Subnet 56, Gradients, is building an end-to-end AI model training suite on Bittensor that isn’t just competing—it’s empirically proving itself as the best-performing training platform on the planet. This deep dive explores their latest technical advancements and a strategic pivot to an open-source model designed to unlock enterprise adoption and sustainable revenue.

Outperforming the Titans

  • "The answer... is pretty clear, it's Gradients. There's no real competition here. It doesn't matter what you chuck at this—translation, math, code, reasoning—Gradients is better."
  • "If you're going to hire a machine learning engineer for $150k a year... you don't need to employ them anymore. You can just run this on Gradients."
  • In a large-scale experiment spanning over 180 model-dataset tests, Gradients outperformed established platforms like Databricks, GCP, and Hugging Face in 83% of cases. The harder the problem, the wider the performance gap.
  • The platform now offers advanced training techniques like DPO (for aligning with human preferences) and GRPO (for custom, complex reward functions), putting state-of-the-art research tools directly in users' hands.
  • Their own fine-tuned model, Gradient Instruct 8B, already beats the official Qwen2 8B Instruct model, one of the world's top open models, on zero-shot benchmarks, demonstrating the raw power of their decentralized training network.

The Open Source Gambit

  • "What's the difference between [a dominant proprietary miner] and OpenAI, really? It's not that different... The whole idea of Bittensor is that it's kind of like open intelligence."
  • The biggest hurdle for enterprise adoption wasn't performance; it was data privacy. Customers were unwilling to send sensitive data to anonymous, "black box" miners.
  • Gradients 5.0 solves this by shifting to an open-source model. Miners now submit their training scripts, which are run by validators in a secure, controlled environment. This allows customers to see exactly what code is touching their data.
  • This creates a powerful innovation flywheel. The best techniques are no longer siloed; they become the new baseline for all miners to build upon, accelerating the network's collective intelligence at a pace traditional companies can't match.

Igniting the Revenue Flywheel

  • "The whole game here is to take the stuff that miners have been doing... move it into an open-source format, and serve that to customers as quickly as possible. That is our reason for existence at the moment."
  • With the open-source model, Gradients can offer a verifiable, best-in-class training service to enterprise clients, directly addressing their security concerns and paving the way for sustainable revenue beyond TAO emissions.
  • Organic traffic is already growing, with a focus on smaller, efficient models (1B to 8B parameters) that are more practical for specific business needs than massive, costly foundation models.
  • The long-term vision includes deep integration with other Bittensor subnets, leveraging decentralized compute networks like Chutes to run training jobs, creating a cost-effective, symbiotic ecosystem.

Key Takeaways:

  • Gradients has validated its technical supremacy and is now executing a brilliant strategic pivot to open-sourcing its core mechanism. This move is designed to dismantle the final barrier to enterprise adoption—data privacy—and transform the subnet into a self-sustaining, revenue-generating powerhouse.
  • Performance is Proven, Not Promised. Gradients isn't just making claims; it’s delivering benchmark-crushing results, consistently outperforming centralized incumbents and producing state-of-the-art models.
  • Open Source Unlocks the Enterprise. The shift to verifiable, open-source training scripts is a direct solution to customer data privacy concerns, turning a critical vulnerability into a competitive advantage.
  • The AutoML Flywheel is Spinning. The network's competitive, tournament-style mechanism creates a self-optimizing system that continuously aggregates the best training techniques, ensuring it remains at the cutting edge.

Link: https://www.youtube.com/watch?v=GZyI8q0Ky2U

This episode reveals how Gradients, a decentralized AI training platform on Bittensor, is not only outperforming giants like Google and Databricks but is now open-sourcing its core technology to accelerate innovation and capture the market.

Introduction: The Impossible Problem

  • The host, Const, introduces Gradients (Subnet 56) by framing it as a solution to a problem he initially believed was "impossible." He describes the subnet as a meta-mechanism that creates a market for training AI models that, in turn, train other AI models.
  • Const highlights the significance of this achievement, noting that the Gradients team has "absolutely blown it out of the water," proving that a decentralized network can successfully tackle such a complex challenge.
  • Quote: "I literally thought that this problem was impossible to do... I actually kind of chuckled in my head when it was first suggested." - Const

Gradients 101: The End-to-End AI Training Suite

  • Wandering Weights, representing the Gradients team, explains the platform's core function: taking a base model that understands language but lacks specific intelligence and fine-tuning it to become useful for human applications.
  • Text Model Training: Users can select any model and dataset from Hugging Face, map the input and output columns (e.g., instruction and answer), set a training duration, and launch the job to the network of miners (a sketch of this flow follows the list).
  • Diffusion Model Training: The platform also supports image model training. Users can upload images, and the system can now auto-caption them, simplifying the process of training models on custom concepts (e.g., generating images of a specific person like "Const").
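
A minimal sketch of what launching such a text-training job could look like programmatically. The endpoint, field names, and repos below are illustrative assumptions, not the actual Gradients API.

```python
# Hypothetical job spec mirroring the workflow described above: pick a base
# model and dataset from Hugging Face, map the columns, set a duration.
import requests

job_spec = {
    "model_repo": "Qwen/Qwen2-7B",        # any base model on Hugging Face
    "dataset_repo": "tatsu-lab/alpaca",   # any dataset on Hugging Face
    "column_mapping": {                   # which columns are input vs. output
        "input": "instruction",
        "output": "output",
    },
    "hours_to_complete": 4,               # training duration
}

# Placeholder endpoint; the real submission path is not given in the episode.
resp = requests.post("https://api.gradients.example/v1/train/text", json=job_spec)
print(resp.json()["job_id"])  # the job is then dispatched to the miner network
```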

Team and Development Velocity

  • The speaker emphasizes the team's research-heavy background, with publications in top-tier AI conferences like NeurIPS and ICML. However, he frames this experience as a proxy for a more important trait: hunger.
  • This drive has resulted in rapid development, with five major releases in just eight months, culminating in nearly half a million lines of code and 4,000 commits.
  • Actionable Insight: The team's aggressive development pace and deep research credentials signal a strong execution capability, a key positive indicator for investors evaluating the project's long-term potential.

Pushing the State-of-the-Art: DPO and GRPO

  • Gradients has moved beyond basic instruction-based fine-tuning by integrating cutting-edge alignment techniques.
  • DPO (Direct Preference Optimization): This feature allows users to train a model by providing it with pairs of "chosen" and "rejected" answers. DPO is a powerful technique for aligning a model's responses with human preferences, such as tone, conciseness, or safety, without needing a complex reward model.
  • GRPO (Group Relative Policy Optimization): A more advanced and novel method pioneered by the creators of the DeepSeek models. GRPO allows users to define custom reward functions that score multiple generated answers, providing more nuanced feedback to the model.
    • Users can select from pre-defined reward functions or create their own, enabling highly customized training. For example, a reward function could be designed to ensure an LLM's output follows a specific format (e.g., "think" and "answer" brackets); a minimal example follows this list.
    • Strategic Implication: Gradients is one of only a few platforms in the world (alongside Databricks) to offer DPO and GRPO as a service. This positions the subnet at the forefront of AI training technology, attracting sophisticated users and researchers.
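
The two data shapes are easy to picture in code. Below is a minimal sketch: a DPO record pairing a chosen and a rejected answer, and a GRPO-style reward function that scores completions for the "think"/"answer" format mentioned above. The record layout and function signature are assumptions for illustration, not the exact schema Gradients uses.

```python
import re

# DPO: each example pairs a prompt with a preferred ("chosen") and a
# dispreferred ("rejected") response; the model learns to favor the former.
dpo_example = {
    "prompt": "Summarise the report in one sentence.",
    "chosen": "Revenue grew 12% on strong enterprise demand.",
    "rejected": "The report, which I have read carefully, says many things...",
}

# GRPO: a custom reward function scores a group of generated completions.
# This one rewards outputs that follow a <think>...</think><answer>...</answer>
# format, the kind of formatting reward described above.
def format_reward(completions: list[str]) -> list[float]:
    pattern = re.compile(r"<think>.*?</think>\s*<answer>.*?</answer>", re.DOTALL)
    return [1.0 if pattern.fullmatch(c.strip()) else 0.0 for c in completions]

print(format_reward(["<think>2 + 2 = 4</think><answer>4</answer>", "just 4"]))  # [1.0, 0.0]
```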

Performance Benchmarks: A "Game Set and Match" Moment

  • The team conducted an extensive study, training over 180 model-dataset pairs on Gradients and comparing the results against major platforms like Databricks, Google Cloud Platform (GCP), Hugging Face, and Together AI.
  • The metric for success was achieving the lowest loss on an unseen test set (illustrated in the sketch after this list).
  • The results were definitive: Gradients outperformed all competitors across nearly every category, including translation, math, code, and reasoning. The only exception was against Hugging Face on tiny models, where results were within the margin of noise.
  • Quote: "When we got this piece of work put together, it was a bit of a wow moment for us where it's like this is game set and match, you know." - Wandering Weights
  • Actionable Insight: These benchmark results provide empirical evidence that Gradients is not just a decentralized alternative but a superior training platform. This performance edge is a critical differentiator for attracting paying customers and justifying its value proposition.
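
Concretely, "lowest loss on an unseen test set" means the average cross-entropy the fine-tuned model assigns to held-out examples. A rough sketch, assuming a Hugging Face transformers causal LM (the checkpoint and test text are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-7B"  # placeholder: whichever fine-tuned checkpoint is being judged
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

test_texts = ["Translate to French: good morning -> bonjour"]  # unseen held-out examples

losses = []
with torch.no_grad():
    for text in test_texts:
        batch = tokenizer(text, return_tensors="pt")
        # Passing input_ids as labels yields the mean next-token cross-entropy.
        out = model(**batch, labels=batch["input_ids"])
        losses.append(out.loss.item())

print(sum(losses) / len(losses))  # lower test loss = better fine-tune
```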

The Pivot to Open Source

  • Despite stellar performance, initial customer conversations revealed a major adoption barrier: data privacy. Enterprises were hesitant to send proprietary data to an anonymous network of miners. This, combined with the team's alignment with Bittensor's open-source ethos, led to Gradients 5.0.
  • The new model requires miners to submit their training scripts for open review. This allows customers to know exactly what code is being run on their data.
  • This shift aims to build trust, solve the data privacy issue, and accelerate innovation by allowing miners to learn from each other's techniques.

The Open-Source Tournament

  • To manage the open-source model, Gradients has implemented a World Cup-style tournament for miners.
  • Miners submit their code repository and hash. The top 16 miners (based on stake) are entered into the tournament.
  • They compete in groups and knockout stages, with their scripts being tested on a variety of tasks (Instruct, DPO, GRPO); the overall flow is sketched in code after this list.
  • The winner must defeat the previous tournament's champion and is rewarded with a significant share of emissions.
  • Strategic Implication: This tournament structure gamifies innovation, creating a highly competitive environment that continuously pushes the state-of-the-art in automated model training (AutoML). The open-source nature ensures these advancements are shared, creating a powerful flywheel effect.
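
A hedged sketch of that flow, assuming four groups of four with the top two advancing; the group sizes, pairing rules, and score() stand-in are assumptions, since the episode only describes the overall shape.

```python
import random

def score(miner: str, task: str) -> float:
    return random.random()  # stand-in for running the miner's script on a real task

def total(miner: str, tasks: list[str]) -> float:
    return sum(score(miner, t) for t in tasks)

def run_tournament(entrants: list[str], prev_champion: str) -> str:
    tasks = ["instruct", "dpo", "grpo"]
    # Group stage: four groups of four, top two advance (assumed structure).
    groups = [entrants[i::4] for i in range(4)]
    knockout = []
    for group in groups:
        knockout += sorted(group, key=lambda m: total(m, tasks), reverse=True)[:2]
    # Knockout rounds: pairwise elimination until one miner remains.
    while len(knockout) > 1:
        knockout = [max(pair, key=lambda m: total(m, tasks))
                    for pair in zip(knockout[::2], knockout[1::2])]
    # Boss round: the winner must also beat the previous champion to take the title.
    return max([knockout[0], prev_champion], key=lambda m: total(m, tasks))

print(run_tournament([f"miner_{i}" for i in range(16)], "reigning_champion"))
```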

Major Breakthrough: Beating Qwen2 with Gradient Instruct 8B

  • The team announced a significant achievement: they fine-tuned a Qwen2 8B base model that outperforms the official Qwen2-Instruct model on zero-shot benchmarks, particularly in math and instruction following.
  • Zero-shot refers to a model's ability to perform a task without any examples provided in the prompt.
  • This was achieved using a custom dataset and the Gradients training process, demonstrating that the platform's optimization is superior even to that of a leading AI lab.
  • The model, "Gradient Instruct 8B," is available for public use; a zero-shot prompting sketch follows this list.
  • Actionable Insight: Creating a model that beats a top-tier, state-of-the-art open-source model is a landmark achievement. It validates the entire Gradients stack and serves as a powerful marketing tool to attract both researchers and commercial clients. The next goal is to prove it is the best 8B model on the planet.
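
Trying the model zero-shot is straightforward with the transformers library. A small sketch, noting the repo id below is hypothetical since the exact Hugging Face checkpoint name is not given in the episode:

```python
from transformers import pipeline

# Hypothetical repo id; substitute the real Gradient Instruct 8B checkpoint.
generator = pipeline("text-generation", model="gradients/gradient-instruct-8b")

# Zero-shot: the prompt states the task with no worked examples in context.
prompt = "Solve: if 3x + 5 = 20, what is x?"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```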

Deep Dive: Anatomy of an AutoML Script

  • In response to a question, Wandering Weights provides a look into a top miner's open-source script, revealing the complexity involved.
  • The script contains logic to make dynamic decisions based on the model and dataset (condensed in the sketch after this list), such as:
    • Packing: A technique that concatenates multiple short examples into a single sequence, reducing wasted padding and improving GPU efficiency.
    • Kernel Optimizations: Using custom GPU kernels (e.g., written in Triton) for faster training.
    • Parameter Selection: Deciding whether to use full fine-tuning or a parameter-efficient method like LoRA (Low-Rank Adaptation), which trains only a small subset of parameters.
    • Hyperparameter Tuning: Automatically selecting learning rates, batch sizes, and evaluation steps based on model size or specific model names (e.g., a custom learning rate for Gemma 2 9B models).
  • This complexity highlights the high barrier to entry for miners and underscores the value of the open-source model for new participants.
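
Condensed into code, that decision logic might look like the sketch below. Every threshold, default, and special case here is invented for illustration; real miner scripts are far larger and more granular.

```python
def build_training_config(model_name: str, n_params_b: float, dataset_rows: int) -> dict:
    config = {
        # Packing: concatenate short examples into full-length sequences
        # so each batch carries less padding.
        "packing": dataset_rows > 10_000,
        # Kernel optimizations: enable custom (e.g., Triton) kernels if available.
        "custom_kernels": True,
        # Parameter selection: full fine-tune small models; fall back to LoRA
        # (train only low-rank adapter weights) for large ones.
        "method": "full" if n_params_b <= 3 else "lora",
        # Hyperparameter heuristics keyed on model size.
        "learning_rate": 2e-5 if n_params_b <= 3 else 5e-6,
        "batch_size": 32 if n_params_b <= 3 else 8,
        "eval_steps": max(50, dataset_rows // 100),
    }
    # Model-specific overrides, like the per-model learning rates mentioned above.
    if "gemma-2-9b" in model_name.lower():
        config["learning_rate"] = 1e-5
    return config

print(build_training_config("google/gemma-2-9b", 9.0, 50_000))
```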

Conclusion: From Technical Superiority to Market Dominance

  • Gradients has proven its technical supremacy in AI model training and is now strategically addressing the final hurdles to enterprise adoption: data privacy and transparency.
  • The open-source model is designed to build trust and accelerate innovation, creating a clear path to revenue growth and market leadership.
