In this episode, Ken, co-founder of Bitmind AI, unpacks the evolution of Bittensor’s Subnet 34, from a deepfake detector into a groundbreaking adversarial system poised to tackle digital identity and take on Worldcoin. He details the subnet’s new architecture, impressive growth metrics, and a bold vision for a decentralized Proof-of-Human.
The Reality Distortion Field
- “Generative AI was getting better and better, and really determining truth and understanding what is reality is part of what makes us human. It's just never been more blurry today.”
- “Reality is so crazy right now that it really makes it even more difficult. So you're starting to doubt even normal things.”
The core mission of Bitmind is to differentiate digital reality from fiction, a problem that has become mission-critical in an era of hyper-realistic generative content. The stakes are high; a fake image of an Iranian attack recently gained over a million views in hours, spreading disinformation like wildfire. Bitmind's tools are seeing significant traction in conflict-adjacent countries, with Pakistan, Bangladesh, and India currently its top three user markets.
Fighting Fire With Fire: The Adversarial Pivot
- “We are...flipping it on its head and implementing a new architecture... we're calling it the generative adversarial subnet... inspired from... generative adversarial networks (GANs).”
- “You're sucking the bad actors towards you and forcing them to train your detective model. So you're kind of... sucking the poison from the wound.”
Bitmind is radically redesigning its subnet into a "Generative Adversarial Subnet" (GAS). This new architecture pits two types of miners against each other: "Discriminators" who detect fakes and "Generative Miners" who are incentivized to create the most convincing fakes possible using state-of-the-art models. This clever design solves major hurdles like scalability and data privacy for enterprise clients while creating a powerful flywheel: it uses the network's incentives to source the world's best generative models to constantly train and harden its own detectors.
Killing the Orb: The Vision for Mind ID
- “We need to kill the orb... create a fully AI-native, decentralized, and open-source proof-of-human service which we'll call Mind ID.”
- “Would you trust Russia if they built the ultimate deepfake detection solution?... The answer is no. You can't trust this problem to a centralized entity.”
Bitmind's endgame extends beyond deepfake detection to a direct challenge against Worldcoin. The vision is "Mind ID," a decentralized, software-based Proof-of-Human service that sidesteps the single-point-of-failure and ethical concerns of Worldcoin’s hardware "orb." By building a dynamic, open-source system that lives on user devices and improves over time, Bitmind aims to provide a more trustworthy and secure standard for digital identity, with plans to roll out the first version in Q4 2024.
Key Takeaways:
Bitmind’s journey shows how a subnet focused on a single commodity can evolve into a sophisticated, multi-faceted platform. Their new adversarial architecture is a powerful solution to the cat-and-mouse game of AI detection, creating a system that gets stronger as its opponents do.
- Adversarial-by-Design is the Future: The most robust AI systems will be those trained in a competitive, adversarial environment. Bitmind’s GAS architecture operationalizes this, incentivizing miners to act as both red team and blue team to build the world’s best detector.
- Software Will Eat the Orb: Bitmind is betting that a dynamic, open-source, software-based Proof-of-Human can defeat a static, centralized, hardware-based solution. Their approach avoids single points of failure and corporate control, offering a more resilient path to digital identity.
- From Commodity to Revenue: Bitmind has a clear path to monetization, projecting $1M in monthly recurring revenue within 12 months of launching its paid services. This strategy aims to achieve profitability and mitigate token sell pressure within six months, providing a model for other subnets to follow.
For further insights and detailed discussions, watch the full episode: Link

This episode reveals how Bitmind AI is evolving from a simple deepfake detector into a sophisticated adversarial network designed to secure digital reality and challenge centralized identity solutions like Worldcoin.
The Genesis of Bitmind: Differentiating Reality from Fiction
- Ken notes that while deepfakes were already a recognized concern when Bitmind launched, the more immediate and visceral impact has been seen in recent global conflicts and in the everyday social media experience.
- He describes the challenge of “doom scrolling” and needing to verify content, highlighting the increasing difficulty in distinguishing truth from fiction.
- Jake, the host, adds a crucial insight about social media algorithms, noting they often amplify “rage bait”—content that triggers strong negative emotions—making the spread of misinformation even more potent. Ken agrees, stating, “The whole advertising space is fascinating. I mean, it really does prey on... your emotional bandwidth.”
A Real-World Test: The Iranian Conflict
- A fake image purporting to show an Iranian attack gained over a million views, and platforms like Grok initially failed to identify it as AI-generated.
- Bitmind's tools were among the first to correctly detect the image as fake, demonstrating its real-world effectiveness. This event led to a significant spike in usage for Bitmind's applications.
- Ken highlights a critical nuance: reality has become so strange that even real events, like a newscaster being bombed mid-broadcast, are often mistaken for fakes. This underscores the importance of verifying content as “real,” not just flagging it as “fake.”
Bitmind's Traction and Team
- User Growth: Bitmind is experiencing 9x month-over-month user growth, with application requests reaching approximately 250,000 per day.
- Global Reach: The most significant user activity comes from the Middle East and South Asia, with Pakistan, Bangladesh, and India being the top three countries. Ken attributes this to proximity to conflict zones, where reliable information is critical.
- Miner Ecosystem: The subnet has attracted top-tier talent, including teams from Google DeepMind and early Midjourney contributors. Ken emphasizes that this level of talent accrual would be impossible without Bittensor's decentralized competitive framework.
The Original Subnet Architecture (Subnet 34)
- Miners: Train models to differentiate between real, AI-generated, and semi-synthetic content.
- Validators: Generate AI data, retrieve real data, augment it to prevent gaming, and then challenge and score the miners.
- A key challenge was creating a balanced data distribution. Early on, AI prompt datasets skewed heavily toward specific themes (e.g., “women in bikinis,” “capybaras on Mars”), allowing miners to differentiate based on content alone. To counter this, Bitmind developed a sophisticated data pipeline that uses a VLM (Vision-Language Model) to generate prompts from real images, ensuring a more even content distribution.
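A minimal sketch of that challenge pipeline, with the VLM, generator, inpainting, and augmentation steps injected as callables (all names here are illustrative assumptions, not Bitmind's code):

```python
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Challenge:
    image: bytes
    label: int  # 0 = real, 1 = fully synthetic, 2 = semi-synthetic

def build_challenges(
    real_images: List[bytes],
    caption: Callable[[bytes], str],         # VLM: real image -> descriptive prompt
    generate: Callable[[str], bytes],        # text-to-image model
    inpaint: Callable[[bytes, str], bytes],  # edits a region of a real image (semi-synthetic)
    augment: Callable[[bytes], bytes],       # crops/compression/noise to deter shortcut features
    n: int = 30,
) -> List[Challenge]:
    """Build a content-balanced batch of validator challenges.

    Prompts for synthetic and semi-synthetic samples are derived from captions
    of real images, so every class shares the same distribution of subjects and
    miners cannot win by classifying on topic alone.
    """
    challenges: List[Challenge] = []
    for i in range(n):
        real = random.choice(real_images)
        kind = i % 3  # keep the three classes evenly represented
        if kind == 0:
            challenges.append(Challenge(augment(real), label=0))
        elif kind == 1:
            challenges.append(Challenge(augment(generate(caption(real))), label=1))
        else:
            challenges.append(Challenge(augment(inpaint(real, caption(real))), label=2))
    random.shuffle(challenges)
    return challenges
```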
Performance and Current Challenges
- Accuracy: In out-of-distribution tests (using data the miners haven't been trained on), the models achieve 88% accuracy for images and 62% for video; a sketch of how such a check is computed follows this list.
- The Video Gap: Ken explains the lower video accuracy is due to the gap between state-of-the-art closed-source video models (like Sora) and available open-source models. Validators can't use closed-source models to generate training data, limiting the miners' ability to learn to detect the most advanced fakes.
- Key Challenges:
- Data Collection: Keeping up with the latest generative models is a constant battle.
- Scale: The current validator-miner interface is not built to handle the goal of 100,000 to 1 million daily active users.
- Data Privacy: Enterprise clients are hesitant to have their data distributed across hundreds of anonymous miner servers, a common hurdle for many subnets.
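An out-of-distribution figure like the one above is typically computed by scoring predictions only on samples from generator families excluded from training. A hypothetical sketch (the `ood_accuracy` helper and its tuple layout are assumptions, not Bitmind's evaluation code):

```python
from collections import defaultdict
from typing import Dict, Iterable, Set, Tuple

def ood_accuracy(
    results: Iterable[Tuple[str, str, int, int]],  # (modality, generator_family, true_label, prediction)
    held_out: Set[str],                            # generator families the miners never trained on
) -> Dict[str, float]:
    """Per-modality accuracy restricted to held-out (out-of-distribution) generators."""
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for modality, family, label, pred in results:
        if family not in held_out:
            continue  # in-distribution samples are excluded from the OOD score
        total[modality] += 1
        correct[modality] += int(label == pred)
    return {m: correct[m] / total[m] for m in total}

# e.g. {"image": 0.88, "video": 0.62} would correspond to the figures quoted above
```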
The Pivot: A Generative Adversarial Subnet (GAS)
- To address these challenges, Bitmind is implementing a new architecture inspired by GANs (Generative Adversarial Networks), a machine learning framework where a generator and a discriminator compete to improve each other. This new design, live on testnet, flips the subnet on its head.
- New Miner Roles (sketched after this list):
- Discriminator Miners: Submit their detection models to a secure storage system. The validators run these models, solving the privacy and scaling issues.
- Generative Adversarial Miners: Are incentivized to produce the highest-quality fake data possible, specifically data that can fool the discriminator models.
- Strategic Benefits:
- Scalability & Privacy: By hosting the discriminator models, Bitmind can create a highly scalable, low-latency, and privacy-preserving inference service for enterprise users.
- State-of-the-Art Data: The new design incentivizes generative miners to use the best closed-source APIs (like Sora, Midjourney, Kling) to create their adversarial examples. This allows Bitmind to “suck the poison from the wound,” as Jake puts it, by using the most advanced generative tools to train the world's best detectors.
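A minimal sketch of how one adversarial round could be scored on the validator, with accuracy-style rewards for discriminators and fool-rate rewards for generative miners; the names and the exact reward math are illustrative assumptions, not the subnet's actual incentive mechanism:

```python
from typing import Callable, Dict, List, Tuple

# A discriminator model maps a sample to the probability that it is fake.
Discriminator = Callable[[bytes], float]

def score_round(
    discriminators: Dict[str, Discriminator],     # models keyed by discriminator-miner UID
    adversarial_samples: Dict[str, List[bytes]],  # fakes keyed by generative-miner UID
    real_samples: List[bytes],
    threshold: float = 0.5,
) -> Tuple[Dict[str, float], Dict[str, float]]:
    """Score one round: discriminators earn for correct calls on real and
    adversarial samples; generative miners earn for every model their fakes fool."""
    disc_correct = {uid: 0 for uid in discriminators}
    gen_fooled = {uid: 0 for uid in adversarial_samples}

    for d_uid, model in discriminators.items():
        for sample in real_samples:
            disc_correct[d_uid] += int(model(sample) < threshold)   # correctly judged real
        for g_uid, fakes in adversarial_samples.items():
            for fake in fakes:
                if model(fake) >= threshold:
                    disc_correct[d_uid] += 1                        # correctly judged fake
                else:
                    gen_fooled[g_uid] += 1                          # this fake slipped through

    n_samples = len(real_samples) + sum(len(f) for f in adversarial_samples.values())
    disc_scores = {uid: c / n_samples for uid, c in disc_correct.items()}
    gen_scores = {
        uid: fooled / max(1, len(discriminators) * len(adversarial_samples[uid]))
        for uid, fooled in gen_fooled.items()
    }
    return disc_scores, gen_scores
```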
The Vision: Beyond Deepfakes to Proof of Human
- The conversation culminates with Bitmind's long-term vision: to expand beyond deepfake detection and build a decentralized, AI-native Proof of Human service called Mind ID.
- This service aims to be a direct competitor to centralized, hardware-based solutions like Worldcoin. Ken criticizes Worldcoin for its single point of failure (the orb), its centralized control, and its questionable data collection practices.
- Mind ID's Approach:
- It will be a software-based system that uses a variety of classification challenges (human vs. non-human, liveness checks) to verify identity.
- Biometric data would be stored in verifiable credentials on a user's own device, ensuring privacy and user control (a toy sketch of this flow follows at the end of this section).
- The underlying Bittensor subnet would continuously harden the system, with generative miners constantly trying to find and patch vulnerabilities.
- Ken states, “You can't trust this problem to a centralized entity, especially not the companies that are creating the generative AI.”
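A toy sketch of the on-device credential flow described above, using Ed25519 signatures from the `cryptography` package; the claim fields, helper names, and 30-day expiry are illustrative assumptions rather than Mind ID's actual design:

```python
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the issuing key would belong to the subnet-backed verification
# service; the credential itself stays on the user's device.
issuer_key = Ed25519PrivateKey.generate()

def issue_credential(device_pubkey: bytes, checks_passed: list) -> dict:
    """Sign a proof-of-human credential after the classification/liveness challenges pass."""
    claims = {
        "subject": device_pubkey.hex(),
        "checks": checks_passed,                          # e.g. ["human_vs_nonhuman", "liveness"]
        "issued_at": int(time.time()),
        "expires_at": int(time.time()) + 30 * 24 * 3600,  # illustrative 30-day validity
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims, "signature": issuer_key.sign(payload).hex()}

def verify_credential(credential: dict) -> bool:
    """A relying party checks only the issuer signature and expiry; no biometric data is shared."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    try:
        issuer_key.public_key().verify(bytes.fromhex(credential["signature"]), payload)
    except InvalidSignature:
        return False
    return credential["claims"]["expires_at"] > time.time()
```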
Roadmap and Monetization
- Roadmap: The GAS architecture will roll out in Q3, followed by a major mobile app release. The initial version of Mind ID is targeted for Q4 2024.
- Monetization:
- Consumer: A paid subscription for the advanced mobile application.
- Enterprise: Direct integrations with social media platforms and businesses to add a security layer.
- SaaS: The ultimate goal is to offer Mind ID and deepfake detection as a service that any developer can integrate into their applications.
- Bitmind projects it can reach $1 million in monthly recurring revenue (MRR) within 12 months of launch, and expects to cross the threshold where revenue offsets token sell pressure from emissions within six months.
Conclusion
Bitmind's strategic pivot to a generative adversarial model marks a significant evolution, aiming to solve critical scaling and data privacy issues. For investors and researchers, this shift, coupled with the ambitious goal of creating a decentralized Proof of Human service, positions Bitmind as a key project to watch in the battle for digital truth.