This episode explores the critical need for AI models to achieve credible neutrality, positioning them as open infrastructure rather than revenue-driven entities, and the profound implications for the future of AI development and accessibility.
The Nature of Thought and Credible Neutrality
- The discussion opens by pondering the future of original thought, with the speaker asserting that the "space of thoughts is massive."
- This introduces the concept of credible neutrality: the principle that a system or platform should operate without discriminating for or against any specific users or outcomes. The speaker notes this has been a complex goal for entities like Ethereum, which aimed for such neutrality.
- A key takeaway for AI development is the need for a spectrum of models, ranging from highly opinionated to credibly neutral, to serve diverse applications and user requirements.
AI Model Training: From Generalization to Specialization
- The speaker, drawing from experience in AI development, explains that AI models are often trained using curriculum learning. This is a machine learning strategy where models first process vast, general datasets and are then progressively trained on more specific and complex information.
- "We have this thing called curriculum learning whereby we take a massive data first and then we make it more and more and more specialized."
- This method allows for the creation of specialized AI, such as the "roly polol" example mentioned by the speaker, which represents a focused application derived from a more generalized foundational model.
- For Crypto AI investors and researchers, understanding this layered training approach is vital for assessing the adaptability, potential biases, and niche applicability of emerging AI models.
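The layered approach described above can be sketched in a few lines. This is a toy illustration, not the speaker's actual pipeline: the `difficulty` heuristic and stage-splitting scheme are hypothetical stand-ins for whatever curriculum a real training run would use.

```python
def difficulty(example: str) -> int:
    """Toy difficulty proxy: longer texts count as more specialized."""
    return len(example.split())

def curriculum_stages(dataset, n_stages=3):
    """Sort examples from general/easy to specialized/hard, then split
    them into progressive stages that the model sees in order."""
    ordered = sorted(dataset, key=difficulty)
    stage_size = -(-len(ordered) // n_stages)  # ceiling division
    return [ordered[i:i + stage_size] for i in range(0, len(ordered), stage_size)]

dataset = [
    "cats purr",
    "transformers use attention",
    "low-rank adapters fine-tune large language models cheaply",
    "dogs bark",
]
stages = curriculum_stages(dataset, n_stages=2)
# Earlier stages hold the shorter, more "general" examples;
# later stages hold the longer, more specialized ones.
```

In a real system the difficulty signal might come from model loss or human labels rather than length, but the shape is the same: broad data first, progressively narrower data later.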
AI as Infrastructure: Vitalik Buterin's "Revenue Evil Curve"
- A central argument is that foundational AI models should be treated as public infrastructure.
- The speaker references an essay on the "revenue evil curve" by Vitalik Buterin (co-founder of Ethereum), whom he calls "this smart guy." The concept holds that infrastructure, which is ideally non-rivalrous (use by one party doesn't prevent use by another) and non-excludable (access cannot easily be denied), can become detrimental or "evil" when it prioritizes revenue generation.
- Such prioritization can lead to compromises in neutrality and accessibility, a critical consideration for investors evaluating the long-term integrity and community alignment of AI projects, especially those in the decentralized space.
The Perils of Compromising Openness: The Stable Diffusion Example
- The speaker uses the example of Stable Diffusion, a well-known open-source text-to-image generation model, to illustrate the negative consequences of shifting away from open principles.
- "We went from fully open source at stable diffusion to having a bit of a license and that messed things up and that's one of the reasons that I decided to start again."
- This move towards licensing is framed as a detrimental step that undermined the project's ethos, prompting the speaker's decision to pursue new initiatives with a renewed commitment to openness.
- This serves as a potent warning for the Crypto AI ecosystem: deviating from core open-source values can alienate communities and fragment development efforts, impacting investor confidence and project sustainability.
The Vision: Credibly Neutral Base AI for Universal Access
- The speaker strongly advocates that "Infrastructure should be available to everyone."
- The ideal scenario involves universal access to a credibly neutral base AI. This foundational AI, transparent in its construction and knowledge base (likened to a "high school" level of general, unbiased information), would act as a common, trustworthy starting point.
- From this base, individuals and organizations could then build or fine-tune specialized applications.
- For researchers, this highlights the importance of developing robust, verifiable, and genuinely neutral foundational AI. For investors, it signals potential in projects that authentically commit to this open infrastructure paradigm, which aligns with crypto's decentralization goals.
Proliferation of Specialized AI from a Neutral Base
- The existence of a credibly neutral and transparent foundational AI could significantly accelerate the development and deployment of specialized AI applications.
- The speaker posits that if a core "education AI" is built upon such a neutral base, its development by one entity (be it a company or organization) could then be leveraged globally.
- This foundational approach allows for the efficient creation and dissemination of AIs for diverse sectors like government or healthcare, as these specialized models would be relatively compact ("just a few gigabytes") and easily adaptable.
- This vision offers a pathway to rapid, decentralized innovation, a key objective for Crypto AI investors and researchers looking for scalable and widely accessible technological advancements.
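One plausible mechanism behind the "just a few gigabytes" claim is parameter-efficient fine-tuning, e.g. low-rank adapters (LoRA). The episode does not name a specific technique, so this NumPy sketch is an assumption for illustration: the shared neutral base weight `W` stays frozen, and a specialization only ships the small matrices `A` and `B`.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 1024, 8                      # hidden size, adapter rank (r << d)
W = rng.standard_normal((d, d))     # frozen base weights (shared by everyone)
B = np.zeros((d, r))                # adapter factors; zero-init so that
A = rng.standard_normal((r, d))     # the effective weights start equal to W

W_eff = W + B @ A                   # specialized weights at inference time

base_params = W.size                # 1,048,576 parameters in the base layer
adapter_params = A.size + B.size    # 16,384 parameters in the adapter
ratio = adapter_params / base_params  # the specialization is ~1.6% of the base
```

Because only `A` and `B` need to be distributed, many sector-specific models (education, government, healthcare) can share one trusted base and remain small enough to copy and audit easily.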
The episode underscores that AI's future hinges on establishing credibly neutral, open infrastructure to avoid pitfalls of revenue-driven models. Crypto AI investors and researchers should prioritize projects fostering transparent, accessible foundational AI, as this will likely drive sustainable innovation and widespread adoption.