This episode unveils Shakeel Hussein's audacious plan with Ridges AI on Bittensor: to make human software engineers obsolete by decentralizing AI agent development and creating a competitive ecosystem for automated coding.
Shakeel's Introduction & Ridges AI's Rapid Rise
- Shakeel Hussein, creator of Ridges AI (Subnet 62 on Bittensor), discusses the intense recent period of rebuilding his team and the project.
- He expresses surprise at the rapid community recognition Ridges AI has received, attributing their progress to a relentless focus on development.
- Shakeel: "It feels cool to like ship. I'm surprised people noticed it this quickly."
- Strategic Implication: Rapid iteration and visible progress can quickly build credibility and traction in the fast-moving Crypto AI space, even for revamped projects.
From AgentTal to Ridges AI: A Founder's Pivot
- Shakeel recounts the origins of AgentTal, started with friends pre-dTAO. dTAO (Dynamic TAO) refers to a significant update to the Bittensor ecosystem that changed its tokenomics and incentive structures.
- The dTAO update disrupted AgentTal's incentive mechanism, leading to the project's temporary abandonment.
- Shakeel, believing in the core idea, bought out his partners and relaunched the project as Ridges AI, hiring a new team of Waterloo interns.
- Investor Insight: Founder conviction and adaptability are crucial, especially when navigating major ecosystem shifts like Bittensor's dTAO migration, which can invalidate previous models.
Building Credibility Through Execution and Speed
- Shakeel emphasizes that shipping functional code is the primary way to build credibility within the Bittensor community.
- Ridges AI surpassed its growth targets, achieving rank 25 much faster than the initial goal of top 40 by end of June.
- The most challenging aspect of onboarding new team members was instilling a mindset focused on incentive design and anticipating how users might try to exploit the system.
- Despite the team being new, the core codebase was rewritten in just three days.
- Researcher Note: The focus on "incentive design" is paramount in decentralized AI, requiring developers to think adversarially from the outset.
Evolving Incentive Design: From Query-Response to Public Agent Code
- Ridges AI is transitioning its incentive mechanism. In the current model, validators send coding problems to miners, who respond with a git diff (a textual representation of changes between two versions of a file, commonly used in version control systems like Git to apply patches to a codebase).
- The new, upcoming model will require miners to publicly post their agent code. Validators will then run this code themselves to evaluate its performance.
- Shakeel explains this shift: "Miners are actually going to, um, like post all of their agent code publicly... anyone can see the miner agent code and anyone can run it by themselves if they want."
- Strategic Implication: Open-sourcing miner agents can enhance transparency and trust, allowing users with proprietary code to run verified agents locally, addressing privacy concerns.
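The current query-response flow can be illustrated with Python's standard difflib, which produces the same unified-diff format that Git uses. This is a minimal sketch; the file names and the buggy/fixed snippets are invented for illustration, not taken from Ridges AI.

```python
import difflib

# A miner's "response" in the current Ridges model is, conceptually, a
# git-style unified diff: the textual delta between the original file
# and the patched file. File contents below are illustrative.
before = ["def add(a, b):\n", "    return a - b\n"]   # buggy original
after  = ["def add(a, b):\n", "    return a + b\n"]   # proposed fix

patch = "".join(difflib.unified_diff(
    before, after, fromfile="a/math_utils.py", tofile="b/math_utils.py"))
print(patch)
```

A validator can then apply such a patch to its own copy of the codebase and check whether the problem is solved, which is what makes the diff a verifiable unit of miner work.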
Task Specialization: The Key to Reliable AI Software Engineering
- A core part of the new incentive mechanism is splitting software engineering into narrow, specialized tasks for AI agents.
- This approach addresses the unreliability of large language models (LLMs) in complex, multi-step coding processes. LLMs are AI models trained on vast amounts of text data to understand and generate human-like language and code.
- Shakeel notes that while LLMs are good at writing code snippets, end-to-end software engineering is iterative and prone to compounding errors (the "90% effect": 90% accuracy over 100 steps results in near 0% overall success).
- By having miners specialize in specific steps (e.g., context selection, code generation for a defined scope), Ridges AI aims to improve the reliability of each step significantly.
- Investor Insight: Specialization in AI agent tasks, rather than aiming for a single monolithic AI coder, presents a more pragmatic and potentially faster path to reliable automated software development.
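The "90% effect" Shakeel describes is simple compounding arithmetic, sketched below: per-step success multiplies across a pipeline, so even a strong per-step accuracy collapses toward zero over many steps.

```python
# The "90% effect": end-to-end success is the product of per-step
# success rates, so long pipelines amplify even small per-step error.
def end_to_end_success(per_step: float, steps: int) -> float:
    return per_step ** steps

print(end_to_end_success(0.90, 100))  # effectively zero
print(end_to_end_success(0.99, 100))  # still fails most runs
```

This is the quantitative case for specialization: pushing each narrow step well above 99% reliability matters far more than marginal gains on a monolithic end-to-end agent.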
Problem Assignment and Synthetic Queries
- Miners will register their agents for specific task types, such as "context selection" (identifying the relevant parts of a codebase for a given problem).
- Validators will generate synthetic problems tailored to these specialized agent types, using LLMs and an open-source framework.
- For instance, a code generation agent would receive the problem statement, the entire codebase (if needed), and the specific context files it's expected to modify.
- Researcher Note: The generation and validation of synthetic data for training and evaluating specialized AI agents is a critical area of development for such platforms.
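The payload described for a specialized code-generation agent can be sketched as a simple data structure. The field names and example values here are hypothetical illustrations, not Ridges AI's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of what a validator might send a specialized
# code-generation agent: the problem statement, a snapshot of the
# codebase, and the scoped context files the agent may modify.
@dataclass
class CodeGenTask:
    problem_statement: str
    repo_snapshot: dict                      # path -> file contents
    context_files: list = field(default_factory=list)

task = CodeGenTask(
    problem_statement="Fix the off-by-one error in pagination.",
    repo_snapshot={"app/paginate.py": "def page(items, n): ..."},
    context_files=["app/paginate.py"],
)
```

Scoping the task this way means the code-generation agent never has to solve context selection itself; that step is a separate, separately rewarded specialty.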
Path to Commercialization: From Specialized Tasks to Organic Queries
- Shakeel believes that perfecting narrow, specialized AI agent tasks is the first step towards commercialization.
- These specialized agents can be offered as services for tedious software engineering jobs, like writing unit tests. Unit tests are small pieces of code written to verify that individual components or functions of a larger software system work correctly.
- The goal is to create "small narrow software engineers" that excel at specific tasks, which can be sold before a full end-to-end AI software engineer is viable.
- Actionable Insight: Investors should look for platforms demonstrating early revenue potential from specialized AI services, as this indicates product-market fit for individual components of a larger vision.
Scaling Task Types: Differentiating "Hard" and "Soft" Tasks
- Adding new task types requires developing reliable synthetic generation and evaluation loops. Shakeel categorizes tasks as "hard" or "soft."
- Hard tasks are well-defined and objectively measurable, such as context selection (e.g., "Have they found the right files? Have they found the right functions... and have they found the right lines?"). These are easier to implement evaluation for initially.
- Soft tasks, like assessing overall code quality, are more subjective and challenging to evaluate automatically.
- For soft tasks, Ridges AI plans to initially use a "semi-good pipeline" and eventually foster a market where "orchestrator agents" select and pay other agents based on performance and reputation for these tasks.
- Researcher Note: The distinction between hard and soft tasks highlights a key challenge in AI evaluation. Hybrid approaches combining automated metrics with market-driven or human-in-the-loop validation may be necessary for complex AI capabilities.
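For a "hard" task like context selection, an objective score falls out naturally by comparing the agent's selected files against a ground-truth set. The sketch below uses a standard F1 metric as one plausible choice; the source does not specify Ridges AI's actual scoring formula.

```python
# Objective scoring for a "hard" task: did the agent pick the right
# files? F1 balances precision (no junk selected) against recall
# (nothing relevant missed). Illustrative, not Ridges AI's metric.
def context_selection_f1(predicted: set, gold: set) -> float:
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)          # correctly selected files
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

score = context_selection_f1({"a.py", "b.py"}, {"a.py", "c.py"})
```

The same pattern extends to functions and line ranges, matching Shakeel's framing of "right files, right functions, right lines" as nested levels of granularity.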
The Orchestration Layer and Agent Marketplace
- Shakeel envisions an "orchestration layer" where agents (potentially specialized orchestrator miners) piece together various specialized agents to solve complex problems.
- He believes miners will ultimately be more effective at developing these orchestrator models than Ridges AI's core team.
- Ridges AI plans to develop the foundational tooling and "communications protocol" to enable seamless handoffs and output verification between different agents in this marketplace.
- Strategic Implication: The development of a robust agent marketplace and orchestration layer is critical for scaling decentralized AI solutions and moving beyond single-task agents.
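The orchestration idea can be sketched as a pipeline that threads a shared task state through a sequence of specialized agents. Everything here (agent names, the dict-based handoff) is an illustrative assumption, not Ridges AI's actual communications protocol.

```python
# Hypothetical orchestration sketch: an orchestrator strings
# specialized agents together, merging each step's output into a
# shared state that the next specialist consumes.
def orchestrate(task: dict, pipeline: list) -> dict:
    state = dict(task)
    for agent in pipeline:
        state.update(agent(state))   # hand off output to next agent
    return state

def context_agent(state):            # specialty: context selection
    return {"context_files": ["app/paginate.py"]}

def codegen_agent(state):            # specialty: scoped code generation
    return {"patch": f"fix applied to {state['context_files'][0]}"}

result = orchestrate({"problem": "off-by-one in pagination"},
                     [context_agent, codegen_agent])
```

In the envisioned marketplace, the orchestrator's edge would be in choosing which specialists to call and verifying each handoff, not in writing the specialists itself.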
Commercialization Strategy: Aiming for Asynchronous AI Software Engineers
- Shakeel contrasts Ridges AI's approach with tools like Cursor and Windsurf, describing them as synchronous pair-programming aids.
- Ridges AI aims for a model more akin to Devin, an AI software engineer that can work asynchronously on tasks.
- Potential product interfaces include GitHub extensions (e.g., "@ridges fix this for me") or Slack integrations where users can delegate tasks to the AI.
- Investor Insight: The market demand for AI coding assistants is high. Platforms offering more autonomous, "virtual software engineer" capabilities could capture significant value if they can overcome reliability and integration challenges.
Addressing Private Repositories and Security Concerns
- The new model of miners publishing their agent code publicly is designed to address security and privacy concerns associated with private codebases.
- Companies can inspect the agent code themselves.
- Ridges AI is developing sandboxing tools to ensure miner-submitted code runs in an isolated environment, unable to access the broader system. Sandboxing is a security mechanism for separating running programs, usually to mitigate system failures or software vulnerabilities from spreading.
- Crucially, companies can self-host and run these public agents on their own infrastructure, maintaining full control over their proprietary code.
- Actionable Insight: Solutions that allow enterprises to leverage decentralized AI without exposing proprietary data are key for broader adoption. Self-hosting and verifiable open-source agents are strong selling points.
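The isolation idea can be sketched in miniature: run untrusted agent code in a separate process with a hard timeout. This is an illustrative toy, not Ridges AI's custom sandbox, and real isolation additionally requires filesystem and network restrictions (containers, seccomp, or similar).

```python
import subprocess
import sys

# Toy sandbox sketch: execute untrusted agent code in a child process
# with a wall-clock timeout, capturing output instead of sharing the
# parent's environment. Real sandboxing needs far stronger isolation.
def run_agent(script: str, timeout_s: int = 5) -> str:
    result = subprocess.run(
        [sys.executable, "-c", script],
        capture_output=True, text=True, timeout=timeout_s,
    )
    return result.stdout

out = run_agent("print('patched 2 files')")
```

The timeout also addresses the denial-of-service concern raised later: an agent stuck in an infinite loop is killed rather than stalling the validator.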
Development Philosophy: Speed, Adaptability, and Short Timeframes
- Shakeel states the public agent code model is about "a week and a half, maybe two weeks away" from rollout, underscoring their rapid development pace.
- He views speed and quality of execution as Ridges AI's primary competitive advantages, as they are not building proprietary AI models.
- Shakeel: "Our advantage is just speed of execution and quality of execution... I prefer to like operate on like maybe like a month time frames and just adjust as needed."
- Strategic Implication: In the rapidly evolving AI landscape, agility and the ability to iterate quickly on incentive mechanisms and product features are more valuable than long-term, rigid plans.
Relevance Amidst Ever-Larger AI Models
- Shakeel argues that even massive future AI models (e.g., "10 trillion parameter model") won't render specialized tooling and agent ecosystems obsolete.
- He points to the "90% effect" – the difficulty of achieving high reliability in multi-step processes even if individual steps are mostly accurate.
- Current advanced models still require significant tooling to ensure they don't "hallucinate" (generate incorrect or nonsensical information) and can apply their capabilities consistently in mission-critical scenarios.
- Researcher Note: The development of robust tooling, verification methods, and compositional AI systems remains critical, regardless of raw model size, for practical and reliable AI applications.
Shakeel Hussein's Background and Core Philosophy
- Shakeel began his software engineering journey at 16, initially to automate his trading strategies.
- He transitioned from trading to building AI software solutions because he saw a larger opportunity in "doing the replacing" of traditional software engineering roles, a field he believes is ripe for AI-driven disruption.
- His experience at Supabase (an open-source Firebase alternative) taught him the critical importance of execution speed and superior developer experience in competitive markets.
- Shakeel's core belief: "My experience has shown that if I try like hell to do something, probably I can get it done."
- Speaker Analysis: Shakeel's perspective is characterized by a relentless drive for speed, a pragmatic approach to building (iterating quickly and fixing mistakes), and a deep understanding of incentive structures derived from his trading background.
Entry into the Bittensor Ecosystem
- Shakeel was introduced to Bittensor by a former colleague from his time at Twitter.
- He was immediately drawn to Bittensor's focus on incentive mechanisms, which resonated with his career in trading and algorithmic systems.
- Shakeel describes Bittensor as "leveraging human nature to the max, just using incentives to try to get like crazy outputs."
- Context: Bittensor is a decentralized network that aims to create a market for artificial intelligence, where "miners" contribute machine learning models and are rewarded based on their performance as evaluated by "validators."
Governance, Incentives, and Bittensor's Challenges
- Shakeel briefly touches upon an old essay he wrote about governance, suggesting voting power could be weighted by expertise and incentives, though he acknowledges the immense difficulty in implementing such a system fairly.
- He relates this to challenges within Bittensor, such as Yuma Consensus (the mechanism by which Bittensor aggregates validator weightings into consensus-based reward allocations) and the potential for perverse incentives among validators.
- Researcher Note: Governance and incentive alignment remain open research problems in decentralized systems, including those focused on AI.
Dynamic TAO (dTAO) and the Imperative for Revenue
- Regarding Bittensor's Dynamic TAO (dTAO) update, Shakeel emphasizes a critical takeaway for subnet owners: the need to generate external revenue quickly.
- He argues that subnets cannot sustainably fund operations by selling their emitted tokens, as this creates downward pressure on their token price, harming miner incentives and overall project health.
- Shakeel: "If I as a subnet owner want to be dumping my emissions to fund my team, that pushes my price down, which pushes my emissions down, which pushes the miner quality down."
- Investor Insight: Subnets on Bittensor (and similar platforms) that demonstrate a clear path to external revenue and a tokenomics model that rewards value creation beyond emission farming are likely to be more sustainable long-term investments.
The Ultimate Vision for Ridges AI (Subnet 62)
- Shakeel's boldest ambition for Ridges AI is unequivocal: "Software engineers are gone and entirely replaced by agents on our platform. All of them 100%. Everyone gone."
- He estimates this reality could be one to two years away, contingent on building the necessary tooling around already capable AI models.
- Strategic Implication: This highlights the disruptive potential envisioned by leaders in the Crypto AI space, aiming for complete automation of complex knowledge work.
The Tooling Gap: What's Stopping Full Software Engineering Automation?
- Shakeel asserts that current AI models are largely "good enough" to perform many software engineering tasks. The primary bottleneck is the lack of robust tooling.
- This tooling includes systems for deployment, managing code dependencies, fixing bugs generated by other LLMs, and providing a reliable interface for non-technical users to direct the AI.
- He cites the rapid growth of companies like Bolt and Lovable (achieving significant Annual Recurring Revenue - ARR - quickly) as evidence of the strong market demand for AI tools that abstract away parts of the software development process.
- Actionable Insight: Investment in companies building the "picks and shovels"—the tooling and infrastructure—for AI-driven software development could yield significant returns, as this layer is crucial for unlocking the capabilities of current and future AI models.
Evaluating Subjective Tasks: Moving Beyond ELO Systems
- Ridges AI previously used an Elo system (a rating system originally designed for chess, adapted here to rank code patches based on pairwise LLM comparisons) for "soft tasks" like evaluating front-end code aesthetics. Shakeel admits this was "very brittle."
- The new approach focuses on evaluating agents on "small hard tasks" (e.g., context selection, unit test generation) with objective metrics.
- Soft tasks will be reintroduced later, with orchestrator agents selecting and paying specialized "soft task agents," effectively creating a free market for subjective evaluations.
- Researcher Note: The shift from direct, algorithmic evaluation of subjective qualities to market-based mechanisms for "soft tasks" is an interesting development, potentially offering a more scalable and robust solution.
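For context, the Elo scheme being retired works as sketched below: each pairwise LLM judgment nudges two patches' ratings, with the size of the update scaled by how surprising the outcome was. This is the standard Elo formula, not Ridges AI's exact implementation.

```python
# Classic Elo update, as applied to pairwise comparisons of code
# patches: the winner gains rating, the loser loses it, scaled by how
# unexpected the result was given the current ratings.
def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

a, b = elo_update(1200.0, 1200.0, a_won=True)
```

The brittleness Shakeel describes is easy to see here: the ratings are only as reliable as the pairwise LLM judgments feeding them, and noisy or gameable comparisons corrupt the whole ranking.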
Fostering Miner Collaboration and Specialization
- Shakeel wants to encourage miners to act as "orchestrator agents," who don't necessarily develop all solutions themselves but instead find and combine the best existing specialized agents.
- Ridges AI will provide standardized tooling to facilitate these interactions, but the strategy of how to string agents together will be up to the orchestrator miners to optimize.
- Strategic Implication: This promotes a modular and competitive ecosystem where innovation can occur at multiple levels—both in specialized agent development and in the art of orchestrating these agents.
Key Challenges in the Redesign: Security and Standardization
- The most difficult problems encountered during Ridges AI's recent redesign involved onboarding the team to think in terms of incentives and designing new, secure systems from scratch.
- A major focus was standardizing how agent code is published and run, particularly the development of a custom sandboxing solution.
- Shakeel: "How do you design a standardized way that is open enough that allows miners to innovate... but at the same time, we don't want them to be able to break out of that environment."
- This custom sandbox software limits agents' internet access (only allowing inference/embedding calls via Ridges AI's platform) and enables validators to run many sandboxed agents concurrently.
- Investor Insight: Robust security measures, including effective sandboxing and standardized agent interaction protocols, are non-negotiable for platforms handling potentially sensitive code and operations.
Anticipating Scaling Issues and Competitive Landscape
- Shakeel acknowledges that scaling the platform, particularly managing inference costs and preventing denial-of-service attacks (e.g., agents running infinite loops), is an ongoing challenge.
- He views competitors as both quiet, smaller companies building end-to-end solutions internally, and more visible players like Devin. He also anticipates that tools like Cursor and Windsurf might expand into asynchronous agent capabilities.
- Researcher Note: The economics of inference and the security of decentralized compute networks are critical research areas that will determine the viability and scalability of platforms like Ridges AI.
Tokenomics: Revenue Share as the Core Flywheel
- Shakeel has moved away from complex tokenomics ideas, now focusing on a straightforward model: driving platform revenue and sharing it with the miners who contribute the value-creating agents.
- The new public code incentive mechanism itself is designed as a flywheel:
- Dominant agents' code is public.
- Other miners can take this code and improve upon it to earn rewards.
- This forces constant innovation and improvement, potentially leading to "agents writing agents."
- Actionable Insight: Tokenomics models directly tied to real revenue generation and value creation, rather than speculative mechanisms, are more likely to foster sustainable and healthy ecosystems.
Reward Distribution in the New System: Favoring Top Performers
- The reward distribution under the new model will be more exponential, with the top cluster of performing agents receiving the bulk of the rewards.
- The "decay function" for a top agent's dominance is the constant threat of another miner improving its public code and taking its place.
- Strategic Implication: Such a competitive dynamic, while potentially harsh, can drive rapid advancements in agent capabilities if structured correctly.
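One way to realize the "more exponential" distribution described above is a softmax-style payout over agent scores, sketched below. The temperature parameter and scores are illustrative assumptions; the source does not specify Ridges AI's actual reward curve.

```python
import math

# Illustrative exponential payout: small score gaps between agents
# translate into large payout gaps, so the top cluster captures most
# rewards. The temperature is an assumed tuning knob.
def payout_shares(scores: list, temperature: float = 0.1) -> list:
    weights = [math.exp(s / temperature) for s in scores]
    total = sum(weights)
    return [w / total for w in weights]

shares = payout_shares([0.95, 0.90, 0.60])
```

A low temperature sharpens the winner-take-most dynamic; paired with public agent code, the incumbent's oversized share is exactly what invites challengers to fork and improve it.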
Shakeel's Advice to His Younger Self
- When asked for advice to his younger self, Shakeel's answer reflects his core philosophy: "To be honest, I probably just moved faster."
- He emphasizes taking risks and maintaining focus on "going all in all the time."
- Speaker Analysis: This reinforces Shakeel's consistent theme of speed and decisive action as key drivers of success in high-innovation environments.
Conclusion
Shakeel Hussein's Ridges AI is aggressively pursuing the automation of software engineering on Bittensor by fostering a competitive, open ecosystem of specialized AI agents. Investors and researchers should monitor the development of its public agent registry and incentive mechanisms for task specialization, as these could pioneer new models for decentralized AI development and commercialization.