The Rollup
May 14, 2025

Why AI Must Stay Neutral: Infrastructure vs. Revenue Curve

This episode dives into the critical argument for AI to remain fundamentally neutral and accessible, positioning models as public infrastructure rather than tools skewed by revenue pursuits. The speaker, reflecting on pivotal moments like Stable Diffusion's licensing shift, champions a future where AI's core remains open for universal benefit.

The Imperative of Credibly Neutral AI Infrastructure

  • "Ideally what we want is models to be infrastructure... Infrastructure typically needs to be non-rivalrous and non-exclusive."
  • "Everyone should have access to a credibly neutral base AI and they can do what they want with it, because if you've got [a model trained] up to high school [level that is] credibly neutral or transparent, then you can adjust it."
  • AI models should be treated as foundational public infrastructure—open to all, without restriction or competition for access. Think of it as digital roads and bridges.
  • A healthy AI ecosystem requires a spectrum: from opinionated models to "credibly neutral" ones. This neutrality, akin to what Ethereum strives for, allows users to safely customize a transparent base AI.
  • Access to a "high school" level of transparent, neutral AI empowers users to tailor models for specialized, innovative applications without hidden biases or agendas baked into the core.

Dodging the "Revenue Evil Curve"

  • "There is a great essay by Vitalik—this smart guy—on the 'revenue evil curve.' He says that infrastructure typically needs to be non-rivalrous and non-exclusive. Once you start introducing revenue components, it can rapidly turn evil because you make compromises."
  • "We went from fully open source at Stable Diffusion to having a bit of a license, and that messed things up, and that's one of the reasons that I decided to start again."
  • Vitalik Buterin's "revenue evil curve" warns that embedding revenue generation into core infrastructure leads to compromises that corrupt its purpose.
  • The speaker points to Stable Diffusion's move from a fully open-source model to a licensed one as a real-world example of this "evil curve" in action, a misstep that prompted a personal reset to prioritize openness.
  • Keeping foundational AI non-commercial is key to preserving its integrity and ensuring it serves the broadest possible good, rather than narrow financial interests.

Curriculum Learning & Democratizing Specialized AI

  • "The way the models are trained is again very similar to a graduate. We have this thing called curriculum learning whereby we take massive data first and then we make it more and more and more specialized."
  • "Once you have a government AI, a healthcare AI, it can then proliferate around the world because it's just a few gigabytes that can do the work."
  • AI models undergo "curriculum learning," starting with vast, general datasets and progressively specializing—much like human education from general knowledge to specific expertise (e.g., creating a specialized "roly polol" from a general model).
  • This process, built upon a credibly neutral foundation, allows for the efficient creation of highly specialized AIs.
  • These specialized models, being just a few gigabytes, can easily proliferate globally, enabling widespread adoption of tailored AI for sectors like government and healthcare, democratizing advanced AI capabilities.
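The general-first, specialized-last progression described above can be sketched in a few lines. This is an illustrative toy under assumed data and a made-up one-parameter model, not the speaker's actual training pipeline: the model learns the mapping y = 2x from phases drawn over progressively narrower input ranges, mimicking the broad-pretraining-then-specialization schedule of curriculum learning.

```python
import random

random.seed(42)

def sample_phase(lo, hi, n):
    """Draw n training pairs for the true mapping y = 2x from [lo, hi]."""
    return [(x, 2.0 * x) for x in (random.uniform(lo, hi) for _ in range(n))]

# Curriculum: wide "general" data first, then increasingly specialized slices.
curriculum = [
    sample_phase(-10.0, 10.0, 200),  # broad pretraining phase
    sample_phase(2.0, 8.0, 200),     # narrower, domain-flavored phase
    sample_phase(4.0, 6.0, 200),     # highly specialized phase
]

w = 0.0                  # single model weight, prediction y_hat = w * x
lr = 0.005
for phase in curriculum:             # phases run in order: general -> specialized
    for x, y in phase:
        w -= lr * (w * x - y) * x    # SGD step on squared error

print(round(w, 3))  # converges to 2.0
```

The ordering is the whole point: each phase starts from the weights the previous, more general phase produced, which is the "graduate" analogy in the quote above.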

Key Takeaways:

  • The soul of AI development hinges on its foundational layers remaining open, neutral, and accessible to all, much like critical public infrastructure. Chasing revenue at this core level risks derailing AI’s potential for universal benefit, leading to a future where AI serves few, not many.
  • Neutrality is Non-Negotiable: Foundational AI must be credibly neutral and non-exclusive, acting as open infrastructure for everyone.
  • Shun the Revenue Siren: Embedding profit motives into core AI infrastructure risks a Faustian bargain, leading down Vitalik's "revenue evil curve" and compromising openness, as seen with Stable Diffusion's licensing shift.
  • Open Base, Specialized Bloom: A transparent, neutral AI foundation is the launchpad for a global explosion of compact, specialized AI applications that can address diverse, critical needs.

For further insights and detailed discussions, watch the podcast: Link

This episode explores the critical need for AI models to achieve credible neutrality, positioning them as open infrastructure rather than revenue-driven entities, and the profound implications for the future of AI development and accessibility.

The Nature of Thought and Credible Neutrality

  • The discussion opens by pondering the future of original thought, with the speaker asserting that the "space of thoughts is massive."
  • This introduces the concept of credible neutrality: the principle that a system or platform should operate without discriminating for or against any specific users or outcomes. The speaker notes this has been a complex goal for entities like Ethereum, which aimed for such neutrality.
  • A key takeaway for AI development is the need for a spectrum of models, ranging from highly opinionated to credibly neutral, to serve diverse applications and user requirements.

AI Model Training: From Generalization to Specialization

  • The speaker, drawing from experience in AI development, explains that AI models are often trained using curriculum learning. This is a machine learning strategy where models first process vast, general datasets and are then progressively trained on more specific and complex information.
  • "We have this thing called curriculum learning whereby we take massive data first and then we make it more and more and more specialized."
  • This method allows for the creation of specialized AI, such as the "roly polol" example mentioned by the speaker, which represents a focused application derived from a more generalized foundational model.
  • For Crypto AI investors and researchers, understanding this layered training approach is vital for assessing the adaptability, potential biases, and niche applicability of emerging AI models.

AI as Infrastructure: Vitalik Buterin's "Revenue Evil Curve"

  • A central argument is that foundational AI models should be treated as public infrastructure.
  • The speaker references an essay by Vitalik Buterin (co-founder of Ethereum, whom he calls "this smart guy") on the "revenue evil curve." This concept suggests that infrastructure, ideally non-rivalrous (use by one doesn't prevent use by another) and non-exclusive (access cannot easily be denied), can become detrimental or "evil" when it prioritizes revenue generation.
  • Such prioritization can lead to compromises in neutrality and accessibility, a critical consideration for investors evaluating the long-term integrity and community alignment of AI projects, especially those in the decentralized space.

The Perils of Compromising Openness: The Stable Diffusion Example

  • The speaker uses the example of Stable Diffusion, a well-known open-source text-to-image generation model, to illustrate the negative consequences of shifting away from open principles.
  • "We went from fully open source at Stable Diffusion to having a bit of a license, and that messed things up, and that's one of the reasons that I decided to start again."
  • This move towards licensing is framed as a detrimental step that undermined the project's ethos, prompting the speaker's decision to pursue new initiatives with a renewed commitment to openness.
  • This serves as a potent warning for the Crypto AI ecosystem: deviating from core open-source values can alienate communities and fragment development efforts, impacting investor confidence and project sustainability.

The Vision: Credibly Neutral Base AI for Universal Access

  • The speaker strongly advocates that "Infrastructure should be available to everyone."
  • The ideal scenario involves universal access to a credibly neutral base AI. This foundational AI, transparent in its construction and knowledge base (likened to a "high school" level of general, unbiased information), would act as a common, trustworthy starting point.
  • From this base, individuals and organizations could then build or fine-tune specialized applications.
  • For researchers, this highlights the importance of developing robust, verifiable, and genuinely neutral foundational AI. For investors, it signals potential in projects that authentically commit to this open infrastructure paradigm, which aligns with crypto's decentralization goals.
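The build-on-a-neutral-base idea can be made concrete with a minimal sketch (a toy illustration under assumed names and data, not any real project's API): the base weights stay frozen and shared by everyone, and a domain team trains only a small additive delta against its own data, leaving the transparent base auditable and untouched.

```python
class NeutralBase:
    """Frozen base model: y = w_base * x. Everyone shares the same w_base."""
    def __init__(self, w_base):
        self.w_base = w_base  # transparent, fixed, auditable

    def predict(self, x, delta=0.0):
        # Specialization is an additive adjustment on top of the frozen base.
        return (self.w_base + delta) * x

def specialize(base, domain_data, lr=0.05, epochs=20):
    """Fit only the small delta (the base stays frozen) to domain data."""
    delta = 0.0
    for _ in range(epochs):
        for x, y in domain_data:
            err = base.predict(x, delta) - y
            delta -= lr * err * x  # gradient step on delta alone
    return delta

# Usage: a hypothetical healthcare team adapts the shared base (y = 2x)
# to its own domain mapping y = 3x by learning a delta of roughly 1.0.
base = NeutralBase(w_base=2.0)
health_data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
delta = specialize(base, health_data)
print(round(base.w_base + delta, 2))  # → 3.0
```

Because only the delta is shipped, the specialized artifact stays small relative to the base — the same reasoning behind the "just a few gigabytes" proliferation claim.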

Proliferation of Specialized AI from a Neutral Base

  • The existence of a credibly neutral and transparent foundational AI could significantly accelerate the development and deployment of specialized AI applications.
  • The speaker posits that if a core "education AI" is built upon such a neutral base, its development by one entity (be it a company or organization) could then be leveraged globally.
  • This foundational approach allows for the efficient creation and dissemination of AIs for diverse sectors like government or healthcare, as these specialized models would be relatively compact ("just a few gigabytes") and easily adaptable.
  • This vision offers a pathway to rapid, decentralized innovation, a key objective for Crypto AI investors and researchers looking for scalable and widely accessible technological advancements.

The episode underscores that AI's future hinges on establishing credibly neutral, open infrastructure to avoid pitfalls of revenue-driven models. Crypto AI investors and researchers should prioritize projects fostering transparent, accessible foundational AI, as this will likely drive sustainable innovation and widespread adoption.
