Hash Rate Podcast
January 2, 2026

Bittensor Brief #16: BitSec Subnet 60


Author: Mark Jeffrey

Date: January 2, 2026

Quick Insight: This summary breaks down how BitSec Subnet 60 uses competitive AI agents to fix the $200 billion cybersecurity failure. It is essential for builders who need to defend against AI-powered exploits at machine speed.

  • 💡 Why are traditional security models failing to keep up with AI-generated code?
  • 💡 How does BitSec turn security auditing into a competitive game for miners?
  • 💡 Can AI agents find vulnerabilities that human specialists consistently miss?

Mark Jeffrey explains how BitSec Subnet 60 addresses the growing gap between AI-driven code production and human-speed security audits. The project creates a decentralized marketplace for AI agents that hunt for vulnerabilities before hackers do.

The Scalability Gap "We need a 10x Superman to defend against 10x Lex Luthors."
  • Human Audit Failure: Manual reviews cannot scale with the massive volume of AI-generated code. Security becomes the primary bottleneck for shipping speed.
  • Asymmetric AI Warfare: AI tools help hackers find exploits faster than humans can patch them. Defense must move at machine speed to survive.
  • Continuous Code Scanning: BitSec agents perform perpetual audits rather than one-off checks. Vulnerabilities are caught the moment code changes.
The Benchmarking Engine "We give our AI Sherlock the same mystery to solve, knowing the answer in advance."
  • Historical Exploit Testing: BitSec Version 2 tests agents against a database of known historical smart contract failures. This proves an agent can find real world holes before it handles live code.
  • Generalization Over Specialization: AI models identify patterns across different languages and architectures without specific training. This removes the need for expensive vertical specialists.
The Defensibility of Hard Problems "Security is an open-ended task."
  • Open Ended Discovery: Unlike code generation which has a fixed output, security requires finding unknown unknowns. This makes the task harder to solve but more valuable once solved.
  • Economic Incentive Alignment: Miners earn rewards by outperforming others on public leaderboards. Competition drives the rapid improvement of security agents.

Actionable Takeaways:

  • 🌐 The Macro Transition: Security is moving from a periodic human service to a continuous machine-verified state.
  • ⚡ The Tactical Edge: Stress-test your current security stack by running it against historical exploit benchmarks.
  • 🎯 The Bottom Line: If you are not using AI to defend your code, you are already losing to the AI trying to break it.

Podcast Link: Click here to listen

Hello everybody and welcome to the very first hash rate of 2026. Today we're going to be covering BitSec subnet 60, which is a security subnet. So BitSec attacks the problem of cybersecurity.

So that means keeping people out of your servers if you're running a company. It also means keeping people out of your smart contract, keeping them from finding exploits. And in the age of AI, where AI empowers all of us but it also empowers the bad people, it creates 10x software engineers but it also creates 10x Lex Luthors. We need a 10x Superman. We need defense against that which also uses AI to look for security holes and find them before the 10x Lex Luthor, who's using AI to find those security holes, does. That is what BitSec is doing.

Cybersecurity is a $200 billion annual market. That's what's spent on it. Yet the outcomes remain kind of crappy. The security spend is high, the time to audit things is slow, and exploit frequency continues to rise. In blockchain alone, billions are lost annually despite widespread use of traditional audits. And recently in the Bittensor ecosystem, it appears one of our subnets suffered an exploit over the weekend, which just highlights even more how important something like subnet 60, BitSec, is.

Why is cybersecurity failing? Well, the root issue is not lack of effort but lack of scalability. Human-led audits just can't keep up. You've got an AI world; it's just moving faster than the humans can move to find the problems before somebody else does. AI code generation is reaching mainstream adoption. In the Bittensor ecosystem, we have Ridges, which is sort of coming up through the ranks right now. They're just in the process of releasing their vibe coding, or AI-assisted software engineering, product. But at large we have Anthropic, we have Claude, we have Cursor, everybody is using these AI code tools. AI code generation has taken off like a bat out of hell. But the corollary, security checking by AI, has not yet taken off.

How does subnet 60 BitSec attack this problem of security? Couple different ways. First of all, it continuously scans and rescans code bases. It's not just like a one-time hit and go away. It's your code is in perpetual audit, sort of like Bitcoin. Bitcoin exists in a state of perpetual public audit. Unlike Fort Knox, which hasn't been audited since the 50s, all of us can go look on the Bitcoin blockchain and see exactly what's going on, who owns what. There's no question. It's an open public perpetual audit at all times. This is sort of the same thing. BitSec is continuously scanning and rescanning code bases.
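The episode doesn't describe BitSec's scanner internals, but the "perpetual audit" idea can be sketched as a watcher that fingerprints a codebase and re-runs the audit whenever anything changes. The `fingerprint` and `maybe_rescan` helpers below are purely illustrative, not BitSec's actual API:

```python
import hashlib
import pathlib
from typing import Callable

def fingerprint(root: str) -> str:
    """Hash every tracked source file so any change is detectable."""
    h = hashlib.sha256()
    for path in sorted(pathlib.Path(root).rglob("*.sol")):
        h.update(path.read_bytes())
    return h.hexdigest()

def maybe_rescan(root: str, last_seen: str, scan: Callable[[str], None]) -> str:
    """Re-run the audit only when the codebase fingerprint changes,
    then return the new fingerprint for the next polling cycle."""
    current = fingerprint(root)
    if current != last_seen:
        scan(root)  # kick off a fresh audit pass on the changed code
    return current
```

Run in a loop (or wired to a commit hook), this gives the "continuous scan and rescan" behavior described above: the audit fires the moment the code changes, not on a periodic human schedule.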

Furthermore, it can generalize across languages and architectures. It's not a specialist. Normally when you have a human audit, you have to have a human specialist. Because this is software, because this is AI, it already knows everything. It is just much better at finding things that a vertically siloed human is not going to find.

Thirdly, it improves through competition and benchmarking. Just like how Ridges is continually putting up competitions to improve its AI software engineering product, this BitSec subnet 60 is continually improving and benchmarking its security, its adversarial product. It's getting smarter and better in the same way that the AI coding software engineering products are as well.

Finally, it can operate at speeds and scales that humans just can't match. You'd need a far greater number of humans. You'd need individual siloed knowledge of all the different types and flavors and languages and architectures that you're trying to penetration test. You're not going to be able to hire that many people. The AI is just going to be able to move a lot faster and know a lot more things right out of the box.

Some early results from BitSec version one demonstrated that even relatively simple AI setups could rediscover real-world exploits. Exploits that we know from history were found. BitSec took their product and set it loose on that codebase, knowing already in advance where the hole was, and BitSec found it. This included high-impact vulnerabilities that had already caused major losses. It passed the test. It would have found it.

In several cases, the exploits were detectable quickly using models that were not explicitly trained on those code bases. BitSec was able to prove that it had a model which was able to generalize. Even though it didn't have the specific training required to find that particular exploit, it still figured it out. You could think of this as sort of like a Sherlock Holmes trying to solve the case. You test it by knowing in advance that Moriarty already did it, and he did it with the knife and the butler in the study. You set everything back up again and you say, "Okay, software Sherlock Holmes, solve the murder." You know in advance what the answer is, and Sherlock succeeds. Now you know Sherlock is a good detective. Same thing going on here with BitSec.

BitSec version one was a proof of concept. The primary goal was to answer a question. Can AI agents find real vulnerabilities in real code bases? The answer was yes. They proved that point. Version one surfaced multiple real world exploits across multiple ecosystems and highlighted a big industry gap. There's no objective benchmark to compare AI security performance in a way that maps to economic outcomes.

BitSec version two addresses this directly. Version two introduces an agent-based architecture where Bittensor miners submit security agents that are then evaluated against a real-world smart contract audit benchmark. This benchmark is built up from historical audit challenges using open-source code bases and verified findings. Again, we know who committed the murder, and we give our AI Sherlock the same mystery to solve, knowing the answer in advance. That's the way in which BitSec version two is benchmarking its security product.
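The episode doesn't detail the benchmark's scoring, but the "known answer in advance" idea amounts to grading an agent on the fraction of historical exploits it rediscovers. Everything here, `Challenge`, `score_agent`, and the naive string-matching agent, is a hypothetical sketch, not BitSec's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass(frozen=True)
class Challenge:
    """A historical exploit we already know the answer to."""
    codebase: str    # source under audit
    known_flaw: str  # ground-truth vulnerability label

def score_agent(agent: Callable[[str], Set[str]],
                challenges: List[Challenge]) -> float:
    """Fraction of known historical exploits the agent rediscovers."""
    found = sum(1 for c in challenges if c.known_flaw in agent(c.codebase))
    return found / len(challenges)

# Two toy challenges with known answers (the "solved murders").
CHALLENGES = [
    Challenge("function withdraw() { msg.sender.call.value(bal)(); bal = 0; }",
              "reentrancy"),
    Challenge("function add(uint a, uint b) returns (uint) { return a + b; }",
              "overflow"),
]

def naive_agent(code: str) -> Set[str]:
    """A deliberately weak agent: flags reentrancy on a string match only."""
    flags = set()
    if "call.value" in code:
        flags.add("reentrancy")
    return flags
```

Because the ground truth is fixed in advance, any two competing agents can be compared objectively: `score_agent(naive_agent, CHALLENGES)` yields 0.5 here, and a stronger agent that also catches the overflow would score 1.0.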

As BitSec agents improve on benchmarks, they become more competitive in live audit challenges. They win more bug bounties. They establish credibility with paying customers and they create external proof of effectiveness via public leaderboards. This creates a direct feedback loop where better benchmark performance can lead to higher real world revenue and stronger demand for the product layer.
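The actual reward math isn't specified in the episode; a minimal hypothetical sketch of the incentive side of that feedback loop, an emissions budget split across miners in proportion to benchmark score, might look like:

```python
from typing import Dict

def allocate_emissions(scores: Dict[str, float], budget: float) -> Dict[str, float]:
    """Split an emissions budget across miners in proportion to
    their benchmark scores (illustrative only, not Bittensor's scheme)."""
    total = sum(scores.values())
    if total == 0:
        # No miner found anything: pay nobody rather than divide by zero.
        return {miner: 0.0 for miner in scores}
    return {miner: budget * s / total for miner, s in scores.items()}
```

Under this toy scheme, a miner scoring 0.9 against a miner scoring 0.1 would capture 90% of the budget, which is the competitive pressure the leaderboard is meant to create.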

How is BitSec like and unlike Ridges? They both use agent-based submissions from miners. They both rely on real-world benchmarks: stuff we know in advance, a test that we know the answers to and can measure against. Both improve via competitive hill-climbing dynamics and align incentives through Bittensor emissions. Those are the similarities.

The problem that BitSec is solving is materially different. Security is kind of an open-ended task. Unlike code generation, which has a fixed output (does it work or not), security is: we don't know if there's a hole in here, tell me if there's a hole. If you don't know in advance, it's kind of hard to grade that test, when even the teacher doesn't know what the answers are. The agents have to discover unknown vulnerabilities across arbitrary architectures and threat models and then match or exceed human findings. This makes BitSec's task harder, but if they crack it, it's also a lot more defensible, because this is a hard thing to do.

Success in this domain compounds trust, credibility, and economic value more strongly than incremental improvements in a generative task. Ridges has several very large competitors: Cursor, Anthropic, Gemini, OpenAI. The end user is the one paying and can assess the quality beyond benchmarks. Ridges has proven there's a market: all these other companies I just mentioned are making giant amounts of cash. There's absolutely a market here. There is product market fit, probably the best one in AI to date. BitSec doesn't really have clear competitors in the same way. There are no clear leaders yet in the AI security auditing field. BitSec is entering a new arena.

BitSec's immediate focus for Q1 is achieving state-of-the-art performance on smart contract audit benchmarks, which makes sense. Bittensor lives and breathes in this crypto ecosystem. Smart contract audits are a giant need: being able to find vulnerabilities in smart contracts. Again, as we saw this past weekend in the Bittensor ecosystem, but multiple times and at much larger scale across other smart contracts and other ecosystems, there have been many, many, many exploits. This is a very important problem to solve.

Once BitSec feels that they've reached reliability and performance, they'll move directly to productization. Just like Ridges, which sort of lived in a universe where they were basically benchmarking themselves against various tests, a highly theoretical, in-the-lab kind of thing, before releasing a product to the real market. This is exactly what BitSec is doing. Once they feel like they've cracked the product, then they're going to release it.

BitSec is not just limited to blockchain. Obviously, going after smart contract audits is the first market and the most logical one: it's in the same ecosystem that BitSec is in, so it makes complete sense. But the same incentive flywheel can be extended to traditional Web2 security: companies that have nothing to do with blockchain, that just have firewalls and processes and things where they want to make sure there are no back doors or holes. Traditional Web2 security, penetration testing, AI model jailbreaking (can you trick the model into doing things it shouldn't be doing?), infrastructure security, or any domain where exploit discovery is in some way measurable.

As AI continues to accelerate code production, security must also become continuous, automated, and economically aligned, and BitSec is positioning itself to become a central layer of trust for code auditing across ecosystems.

That's it. My name is Mark Jeffrey. This has been the first hash rate of 2026. We'll see you all next time.
