This episode reveals why the future of scalable AI depends on crypto's core principles, as Gensyn's co-founders break down their mission to build a verifiable, decentralized machine learning network.
The Founders' Origin Story: A Serendipitous Start
- Ben Fielding, with a Ph.D. in deep learning and neural architecture search, brought deep academic expertise.
- Harry Grieves, with a background in applied machine learning and econometrics, brought industry experience, particularly with the high costs of compute for large models.
- They met on the program's kickoff weekend, just before the global COVID-19 lockdown forced everything remote. This twist of fate led to an intense collaboration over 14-hour Zoom calls that solidified their partnership.
From Privacy to Compute: The Evolution of Gensyn's Mission
- Gensyn's initial focus was on applying federated learning—a technique where AI models are trained across decentralized devices without moving the underlying data—to solve privacy challenges for financial institutions (a minimal sketch of the idea follows this list). Their early tagline was "models to the data instead of data to the models."
- They quickly realized this technology had a much broader application. The core problem wasn't just data silos, but also compute silos.
- Both founders had experienced the "compute problem" firsthand—Ben competing with DeepMind's resources from his university lab and Harry facing massive costs at a startup.
- This led to a pivotal shift: unlocking the "long tail of compute" by creating a network of decentralized devices.
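The federated learning idea mentioned above can be illustrated with a minimal federated-averaging sketch. This is a hypothetical toy example in plain numpy, not Gensyn code; the function names and the linear-regression task are purely illustrative. Each client trains on its own private data and only the model weights are shared, so the raw data never moves.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=10):
    """Run a few steps of local linear-regression SGD on one client's private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """One FedAvg round: send the model to each client, average the returned weights."""
    local_weights = [local_update(global_w, X, y) for X, y in client_data]
    return np.mean(local_weights, axis=0)       # only weights are shared, never data

# Toy demo: three clients hold disjoint private datasets.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)   # approaches [2.0, -1.0] without any client revealing its data
```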
The Core Challenge: Verifying Decentralized AI
- Harry Grieves explains the two approaches to verification: input-driven (proving that a specific function was run on a specific input) and output-driven (checking whether the result merely looks acceptable). For rigorous, experimental AI research, input-driven verification is essential (the contrast is sketched after this list).
- Modern AI workloads, such as generative models, are state-dependent: the output of one step feeds into the next. This makes them difficult to parallelize across many machines.
- Quote from Harry Grieves: "Our focus was how do you kind of game theoretically move the Nash equilibrium to someone actually doing the work you asked... we looked through the literature... and we basically shut every single door in the literature apart from one... and that was crypto."
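A toy sketch of the two verification styles, not Gensyn's actual protocol (all names here are illustrative): the workload is a deterministic, state-dependent loop, input-driven verification re-runs it on the same input and compares hashes, while output-driven verification only applies a loose plausibility check.

```python
import hashlib

def generate(seed: int, steps: int) -> list[int]:
    """Toy autoregressive workload: each step depends on the previous state,
    so the work cannot be naively split step-by-step across machines."""
    state, outputs = seed, []
    for _ in range(steps):
        state = (state * 6364136223846793005 + 1442695040888963407) % 2**64
        outputs.append(state % 1000)
    return outputs

def digest(outputs: list[int]) -> str:
    return hashlib.sha256(",".join(map(str, outputs)).encode()).hexdigest()

# A worker claims it ran generate(42, 8) and returns its result plus a hash.
claimed = generate(42, 8)
claimed_hash = digest(claimed)

# Input-driven verification: re-execute the exact function on the exact input
# and compare hashes; any deviation is caught bit-for-bit.
assert digest(generate(42, 8)) == claimed_hash

# Output-driven verification: only checks that the result "looks acceptable",
# e.g. right length and value range; a lazy worker could fake this cheaply.
def looks_acceptable(outputs):
    return len(outputs) == 8 and all(0 <= o < 1000 for o in outputs)

print(looks_acceptable(claimed))
```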
The Problem of Probabilistic AI and the Need for Determinism
- Determinism is the principle that a given set of inputs will produce the exact same, bit-for-bit identical output every time. This is crucial for cryptographic verification.
- Ben Fielding explains that without determinism, you are left with comparing outputs to see if they are "close enough." This creates a "squishy margin for error" that can be exploited.
- Strategic Implication: Bad actors can exploit this non-determinism to perform "laziness attacks" (doing less work to save money) or, more sinisterly, "poisoning attacks" to subtly inject censorship or bias into a model that would go undetected by simple distance-based checks.
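A hedged illustration of the "squishy margin" point, using made-up numbers: a distance-based check accepts a result that was nudged within tolerance, while a bit-for-bit hash of a deterministic computation rejects it.

```python
import hashlib
import numpy as np

honest = np.array([0.1234567, -0.7654321, 0.5555555])   # honest worker's result
nudged = honest + 1e-4                                   # subtly biased ("poisoned") result

# Output comparison with a tolerance: the squishy margin lets the nudge through.
print(np.allclose(honest, nudged, atol=1e-3))    # True -- attack accepted

# Deterministic, bit-for-bit verification: hash the exact bytes of the result.
def h(a: np.ndarray) -> str:
    return hashlib.sha256(a.tobytes()).hexdigest()

print(h(honest) == h(nudged))                     # False -- attack caught
```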
Gensyn's Three Core Beliefs for Building the Network
- Build from Primitives: Instead of verifying entire complex models (like Transformers), which change rapidly, Gensyn focuses on verifying the underlying mathematical primitives (e.g., matrix multiplications). This creates a robust and future-proof foundation.
- Verification Requires Determinism: To achieve true, trustless verification that can be recorded on a blockchain via a hash, you need absolute determinism. Probabilistic checks are insufficient for creating a final, unforgeable source of truth.
- The Future is Horizontal Scale: The current AI paradigm of vertical scaling (stacking more GPUs in a single data center) is hitting diminishing returns. Gensyn believes the next era is horizontal scaling—distributing computation across a vast, open network of devices, much like the MapReduce revolution did for big data.
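A toy sketch of the first two beliefs taken together: the unit of verification is a single mathematical primitive (a matrix multiplication), checked deterministically by re-execution and hashing. Integer arithmetic is used here only to sidestep floating-point nondeterminism in a short example; Gensyn's actual approach relies on its own compiler and verification game, and the function names below are illustrative.

```python
import hashlib
import numpy as np

def matmul_with_commitment(A: np.ndarray, B: np.ndarray):
    """Execute one primitive (a matmul) and commit to the result with a hash."""
    C = A @ B
    commitment = hashlib.sha256(C.tobytes()).hexdigest()
    return C, commitment

def verify(A, B, claimed_commitment) -> bool:
    """Re-run the same primitive on the same inputs; determinism makes the
    hash comparison exact, so no 'close enough' margin is needed."""
    _, commitment = matmul_with_commitment(A, B)
    return commitment == claimed_commitment

# Integer inputs keep this toy example exactly reproducible across machines.
rng = np.random.default_rng(1)
A = rng.integers(-5, 5, size=(4, 8))
B = rng.integers(-5, 5, size=(8, 3))

C, proof = matmul_with_commitment(A, B)   # done by an untrusted worker
print(verify(A, B, proof))                # True; any tampering with C flips the hash
```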
Hallucinations vs. Determinism
- A model can be fully deterministic and still hallucinate—meaning it consistently makes up the same incorrect fact.
- Hallucinations often arise from the model trying to satisfy a user's request (e.g., providing a citation when it doesn't know one) and are a feature of current next-token prediction architectures.
- Researcher Insight: The problem of hallucinations is more related to model architecture and reward functions than the underlying computational integrity. Deterministic systems don't solve hallucinations but ensure that any output, hallucinated or not, is verifiably correct according to the model's process.
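A toy illustration of that distinction (a purely hypothetical example with a placeholder "model" and an obviously fabricated citation string): a fully deterministic model can return the same wrong fact every time. Re-execution verifies that the output faithfully follows the model's process; it says nothing about whether the fact is true.

```python
import hashlib

def toy_model(prompt: str) -> str:
    """Deterministic lookup 'model': the same prompt always yields the same answer,
    including the same made-up citation (deterministic, yet still a hallucination)."""
    answers = {
        "who proved X?": "X was proven by A. Example, 'Journal of Examples' (1999).",  # placeholder, fabricated on purpose
    }
    return answers.get(prompt, "I don't know.")

out = toy_model("who proved X?")

# Verification: re-running the deterministic process reproduces the exact output...
assert hashlib.sha256(toy_model("who proved X?").encode()).hexdigest() == \
       hashlib.sha256(out.encode()).hexdigest()

# ...but the hash only certifies computational integrity, not factual accuracy.
print(out)
```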
Gensyn's Tech Stack and Go-to-Market
- Base Protocol: The deepest layer, containing the verification games and cryptographic primitives. This is where their research on efficient on-chain and off-chain interactions is focused.
- Execution Layer: This includes a custom-built compiler to ensure bitwise-exact, deterministic results across different hardware—a massive engineering challenge in itself.
- Communication & Application Layer: This is where frameworks like RL Swarm (peer-to-peer reinforcement learning) and optimization strategies like NoLocal (a gossip-based training method that reduces communication overhead) operate.
- Strategic Focus: Gensyn is now focused on pushing this full stack to mainnet to serve three primary use cases: achieving massive scale, enabling applications that require inherent verification (e.g., AI-driven financial settlement), and building the substrate for the autonomous machine-to-machine economy.
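A minimal sketch of gossip-style parameter averaging in the spirit of the communication layer described above. This is toy numpy code, not Gensyn's NoLocal or RL Swarm implementation: each peer averages with a single randomly chosen partner per round instead of running an all-reduce across every node, and the parameters still drift toward consensus.

```python
import numpy as np

rng = np.random.default_rng(0)
n_peers, dim = 8, 4
# Each peer holds its own locally trained parameter vector.
params = [rng.normal(size=dim) for _ in range(n_peers)]

def gossip_round(params):
    """One gossip round: peers are paired up at random and each pair averages
    its parameters -- far less communication than a full all-reduce."""
    new = [p.copy() for p in params]
    order = rng.permutation(n_peers)
    for i in range(0, n_peers - 1, 2):
        a, b = order[i], order[i + 1]
        avg = (new[a] + new[b]) / 2
        new[a], new[b] = avg, avg.copy()
    return new

target = np.mean(params, axis=0)            # what a full all-reduce would compute
for _ in range(20):
    params = gossip_round(params)
print(np.max([np.linalg.norm(p - target) for p in params]))  # shrinks toward 0
```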
The North Star: Augmentation, Autonomy, and a More Effective Humanity
- Autonomy: Machines take over tasks previously done by humans, scaling with electricity instead of biology.
- Augmentation: Humans remain at the center of a task, but their capabilities are enhanced by AI.
- Ben Fielding's Vision: "Put that premise behind every single technical interaction we have and you get a completely different system of technology that just makes us more effective and scales the parameter space of models across the entire world." This means moving from static, coded app interfaces to dynamic, personalized experiences driven by models that infer user intent.
Conclusion
This conversation highlights that the next frontier for AI is not just bigger models, but a fundamental shift from centralized vertical scaling to decentralized horizontal scaling. For investors and researchers, the key takeaway is that verifiable, deterministic computation is the critical enabler for this shift, representing a foundational investment opportunity.