The People's AI
April 3, 2025

How AI Will Evolve (and Decentralize): With MIT Media Lab's Ramesh Raskar

MIT Media Lab Associate Professor Ramesh Raskar outlines why decentralized AI isn't just a rival to Big Tech AI, but its natural, inevitable evolution, moving from a necessary "Mainframe era" towards a more innovative and equitable "PC era" and beyond.

AI's Evolutionary Leap: From Mainframes to Decentralization

  • "I like to compare this to the Mainframe era of AI versus the PC era of AI... it's the same thing with AI. We're starting with the Mainframe mindset... and then we're going to switch over to Edge AI."
  • "He shows decentralized AI not as a rival to centralized AI but rather as a natural evolution... as the inevitable place where this Tech had to begin and now it's time to evolve."
  • Centralized AI was a crucial first step, concentrating talent, data, and compute to kickstart progress, much like the mainframe era of computing.
  • However, this concentrates responsibility for ethics and innovation-roadmap decisions in a few central players, burdening them and limiting broader participation.
  • The evolution mirrors the shift from mainframes to PCs: pushing compute and innovation to the edges fosters a more democratic and economically distributed AI landscape, progressing through phases from PC AI → Intranet of Agents → Internet of Agents → Web of Agents.

Building the Decentralized Future: The Four Pillars

  • "We think there are four main problems we need to solve. One is how do we work across data silos... privacy preserving machine learning. The second pillar is incentivization... incentive engineering... where the web 3 promise really shines. The third one is verifiability and proof of work... And the fourth pillar I would call it a special UX... a crowd ux."
  • Privacy: Goes beyond consent to "no peek privacy," ensuring data remains inaccessible even internally, potentially using techniques like Trusted Execution Environments. Federated Learning alone is insufficient due to its centralized coordination.
  • Incentives: Leverages Web3 primitives (tokens, smart contracts) for transparently engineering participation and fairly valuing contributions of data, compute, and model inference.
  • Verifiability: Requires cryptographic proofs (proof of data, training, inference) to establish trust and authenticity in a decentralized system, ensuring users get the AI quality they expect.
  • Crowd UX: Needs new interfaces enabling discovery, orchestration, and collective awareness among agents and users, akin to Google Maps showing real-time traffic generated by the crowd. Vana is highlighted as a project tackling all four pillars simultaneously.

The Emergent Web of Agents & Path Forward

  • "We're going to move from the internet of AI agents to the web of AI agents... we're going to really have an experience that's completely difficult to imagine."
  • "AI agents are going to do the same thing [as humans]: they'll go socialize on our behalf, they will learn on our behalf, and they'll make money on our behalf."
  • The future envisions billions or trillions of AI agents interacting, forming a "Web of Agents" that requires new protocols and experiences, potentially emerging dynamically rather than through rigid standards.
  • These agents will form a complex digital economy, engaging in social activities, continuous learning ("agent school"), and performing labor, necessitating decentralized coordination.
  • Adoption hinges on buy-in from established tech players and platforms, alongside the decentralized community focusing rigorously on privacy, verifiability, and transparent, non-scammy incentives.

Key Takeaways:

  • Decentralized AI is framed not as an antagonist to centralized systems, but as the logical next phase, unlocking broader innovation and economic participation. Building this future requires a holistic approach addressing privacy, incentives, trust, and user experience simultaneously.
  • Evolve, Don't Fight: View decentralized AI as the natural evolution from the necessary "Mainframe" stage of centralized AI, fostering collaboration over conflict.
  • Master the Four Pillars: Success requires simultaneously solving for true privacy, Web3-powered incentives, cryptographic verifiability, and novel "crowd UX" interfaces.
  • Build the Agent Economy: Prepare for a future where autonomous agents socialize, learn, and earn, demanding decentralized infrastructure for this new digital labor market.

Link: https://www.youtube.com/watch?v=Dkk4GZw-32I

This episode features MIT's Ramesh Raskar reframing decentralized AI not as a rival to centralized systems, but as the inevitable next stage in AI's evolution, outlining a strategic roadmap for investors and researchers navigating this shift.

Meet Ramesh Raskar: An Established Voice for Decentralized AI

The conversation introduces Ramesh Raskar, Associate Professor at MIT Media Lab, highlighting his extensive background in AI research spanning machine learning, imaging, health, robotics, and ventures to democratize technology. Unlike many voices prominent only within crypto circles, Raskar brings established academic credibility from the mainstream AI world, and his industry experience at Google and Facebook underscores his deep understanding of both centralized and decentralized systems.

The Genesis of Decentralized AI Thinking

Raskar shares that his conviction in decentralized AI isn't recent; it originated nearly a decade ago during non-profit work in India. Observing challenges around talent accessibility, digital rights awareness for the underserved, and the lack of economic participation for those outside the core AI revolution sparked key research ideas. This experience highlighted the need for systems where individuals could maintain control and benefit economically from their data and participation in AI, laying the groundwork for his long-term focus on decentralization well before it became a popular topic.

Centralized AI: The Necessary "Mainframe Era"

Raskar compellingly argues that the current dominance of centralized AI was a necessary starting phase, drawing an analogy to the history of computing. He compares today's large AI models run by major tech companies to the "Mainframe era," where computing power was concentrated. This initial centralization, he notes, was crucial for getting the "initial flywheel going" and making rapid early progress by consolidating talent, data, and compute resources.

The Evolution, Not Revolution: From Mainframes to the "PC Era" of AI

Pivoting from the mainframe analogy, Raskar positions decentralized AI as the natural evolution, akin to the shift from mainframes to the "PC era" of computing in the 80s and 90s. He suggests we are moving towards "Edge AI" or "Edge Compute," where AI processing occurs more locally on user devices. "Let's put the computing at the edge, and that completely changed the way we think about the internet," Raskar states, applying the same logic to AI's trajectory. This shift promises greater innovation at the edges, more responsible AI through distributed checks and balances, and broader participation in shaping AI's future.

Beyond the PC Era: The Future Stages of AI Evolution

  • Intranet of AI Agents: Agents communicate within walled gardens among trusted, known entities.
  • Internet of AI Agents: Agents interact in an open, potentially untrusted environment, facing challenges of identity and security; this is the core decentralized AI world.
  • Web of AI Agents: Analogous to the shift from the basic internet (FTP, Gopher) to the World Wide Web (HTTP, HTML, URLs, the browser), this phase involves richer, more complex interactions and experiences between agents, moving beyond simple transactions. Raskar notes his research focuses heavily on this "Web of Agents" future.

Limitations of Centralized AI and the Push Towards Decentralization

  • Input Drivers: The rise of powerful Edge AI capabilities on local devices and the potential for trillions of small, self-contained AI agents necessitate a decentralized architecture.
  • Output Goals: Centralized players reluctantly become arbiters of ethical standards and innovation roadmaps (e.g., content generation controversies); decentralization diffuses this responsibility. More critically, it enables a true "economy of digital labor," where individual agents, representing users like Jeff or Ramesh, can perform tasks and earn value based on their capabilities and the data/knowledge they represent. Raskar argues economic forces naturally push away from centralized hub-and-spoke models towards distributed graph structures as this agent economy matures.

The Four Pillars of Decentralized AI

  • 1. Privacy-Preserving Machine Learning Across Data Silos: Moving beyond basic Federated Learning (a technique where models train on local data without the data leaving the device, but often still coordinated centrally), this requires sophisticated methods to enable computation across distributed data sources without centralizing or exposing raw data, achieving "no peek privacy" (sketched below).
  • 2. Incentive Engineering: Leveraging Web3 concepts (like tokens and smart contracts, minus the speculative excess) to design fair and transparent mechanisms for valuing data contributions, compute power, and model inference (sketched below). This answers the "why participate?" question for users and their agents.
  • 3. Verifiability and Proofs: Establishing trust in a trustless environment. This involves cryptographic methods, in the spirit of the proof of work popularized by Bitcoin but broadened, to create verifiable proofs of data provenance, model training integrity, and inference accuracy: Proof of Data, Proof of Training, and Proof of Inference (sketched below).
  • 4. Specialized Crowd UX: Designing user interfaces and experiences beyond simple dashboards. This "Crowd UX" needs to handle discovery (finding the right agents/services), orchestration (managing agent teams), and contextual awareness, similar to how Google Maps shows real-time traffic (other users' data) alongside directions (sketched below).
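
To ground the first pillar, here is a minimal sketch (in Python with NumPy) of classic federated averaging, the baseline Raskar says is insufficient on its own: raw data never leaves each silo, but a central coordinator still sees every weight update, which is the gap "no peek privacy" techniques like TEEs aim to close. The silo data and model are toy assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, steps=100):
    """Gradient descent on one silo's private data; only weights leave."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_average(silos, rounds=10, dim=3):
    """Classic FedAvg: a *central* server averages silo updates each round.
    Raw data never moves, but the coordinator still sees every update --
    exactly the gap that "no peek privacy" techniques aim to close."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in silos]
        global_w = np.mean(updates, axis=0)  # aggregation = trust bottleneck
    return global_w

# Toy demo: three silos sharing the same underlying linear relationship.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
silos = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    silos.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

print(federated_average(silos))  # approaches true_w without pooling raw data
```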
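
For the second pillar, a toy sketch of incentive engineering: contributors are paid from a fixed reward pool in proportion to a leave-one-out estimate of their data's value. The valuation rule, names, and numbers are illustrative assumptions, not any project's actual mechanism.

```python
import math

def leave_one_out_values(quality_fn, silos):
    """Value of silo i = quality(all silos) - quality(all silos minus i)."""
    full = quality_fn(silos)
    return {name: full - quality_fn({k: v for k, v in silos.items() if k != name})
            for name in silos}

def payout(values, reward_pool=100.0):
    """Split a fixed token pool in proportion to (non-negative) value added."""
    clipped = {k: max(v, 0.0) for k, v in values.items()}
    total = sum(clipped.values()) or 1.0
    return {k: reward_pool * v / total for k, v in clipped.items()}

# Toy quality function: more samples help, with diminishing returns.
# In a real system this would be validation accuracy of the shared model.
quality = lambda silos: math.log(1 + sum(silos.values()))

silos = {"alice": 500, "bob": 2000, "carol": 100}  # sample counts (hypothetical)
print(payout(leave_one_out_values(quality, silos)))  # bob earns the most
```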
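
For the third pillar, a minimal sketch of the commitment idea behind proofs of inference: hash receipts bind a model, an input, and an output so a result can be re-checked. Real systems would use zero-knowledge proofs or TEE attestation; this hash chain is only the flavor, not a sound cryptographic protocol.

```python
import hashlib
import json

def digest(obj) -> str:
    """Stable SHA-256 digest of a JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def inference_receipt(model_weights, prompt, output):
    """A reproducible receipt committing to which model produced which output
    for which input; anyone holding the same weights can recompute and compare.
    (Real proof-of-inference would use zkML or TEE attestation; a bare hash
    chain only catches providers who swap models or outputs after the fact.)"""
    parts = [digest(model_weights), digest(prompt), digest(output)]
    return {"model": parts[0], "input": parts[1], "output": parts[2],
            "receipt": digest(parts)}

weights = {"layer1": [0.1, -0.3], "layer2": [0.7]}  # hypothetical model
r1 = inference_receipt(weights, "translate: hola", "hello")
r2 = inference_receipt(weights, "translate: hola", "hello")
assert r1 == r2  # deterministic, hence independently checkable
```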
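
And for the fourth pillar, a sketch of the discovery side of crowd UX: a registry where agents advertise skills and a crowd-sourced load signal lets the interface route around congestion, Google-Maps-style. The registry design and field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AgentListing:
    name: str
    skills: set[str]
    recent_requests: int = 0  # the "live traffic" signal from the crowd

class Registry:
    """Discovery half of crowd UX: agents advertise skills, users query."""
    def __init__(self):
        self.listings: list[AgentListing] = []

    def register(self, listing: AgentListing) -> None:
        self.listings.append(listing)

    def discover(self, skill: str) -> list[AgentListing]:
        """Matching agents, least-loaded first, so an interface can route
        around congestion the way a map routes around traffic."""
        hits = [l for l in self.listings if skill in l.skills]
        return sorted(hits, key=lambda l: l.recent_requests)

reg = Registry()
reg.register(AgentListing("translator-7", {"translate"}, recent_requests=40))
reg.register(AgentListing("polyglot-2", {"translate", "summarize"}, recent_requests=3))
print([a.name for a in reg.discover("translate")])  # polyglot-2 listed first
```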

Addressing Ecosystem Complexity and Standards

Jeff raises the challenge of the current fragmented decentralized AI landscape, with numerous projects building point solutions. Raskar acknowledges the seeming chaos but cautions against getting bogged down in premature standardization efforts, unlike traditional software development. He suggests that AI agents, being intelligent, will dynamically negotiate protocols and standards during interactions, much like humans adjust their communication style. "We should not be bogged down by the need to have common standards and protocols... they will emerge in interactions," he advises, emphasizing that the rapid pace of AI innovation makes rigid, top-down standards impractical. He encourages projects to explore the "blue ocean" opportunities beyond just decentralized training and agent communication protocols.
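
A minimal sketch of what such emergent, negotiated standards could look like: each agent advertises a ranked list of the protocols it speaks, and a pair settles on the best mutual option instead of requiring one global standard. The protocol names and negotiation rule are invented for illustration.

```python
def negotiate(mine: list[str], theirs: list[str]) -> str | None:
    """Pick the highest-preference protocol both agents support.
    'mine' is ordered by my preference; no pre-agreed global standard."""
    theirs_set = set(theirs)
    for proto in mine:
        if proto in theirs_set:
            return proto
    return None  # no overlap: the pair could recruit a translator agent

# Hypothetical capability lists advertised at handshake time.
agent_a = ["agent-rpc/2", "agent-rpc/1", "plain-json"]
agent_b = ["plain-json", "agent-rpc/1"]

print(negotiate(agent_a, agent_b))  # -> "agent-rpc/1"
```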

Vana as a Case Study: Tackling the Four Pillars

Raskar highlights Vana (the podcast partner) as an example of a project thoughtfully addressing all four pillars simultaneously, contrasting this with approaches that centralize user data first. He emphasizes Vana's focus on:

  • Privacy: Ensuring data isn't centralized, potentially using techniques like trusted execution environments, aiming for "no peek privacy."
  • Incentives: Building mechanisms to value data contributions fairly (referencing the VRC-20 data token standard).
  • Verifiability: Incorporating proofs for data and computation integrity.
  • UX: While acknowledging UX is evolving, the foundational work on the other pillars enables a more robust user experience later.

This comprehensive approach, Raskar argues, is crucial for building stable and trustworthy decentralized AI systems, like a "four-legged stool."

The Vision: The "Web of Agents" Experience

Painting a picture of the future "Web of Agents," Raskar envisions a world where user agents act semi-autonomously:

  • Agent Enhancement: Agents might attend "agent school" to learn new skills or concepts, requiring users to invest in their agents' capabilities.
  • Agent Management: Agents might need HR-like functions for behavior or career paths, or even "repair shops" for upgrades (e.g., moving from an old OS to a new "VanaOS").
  • Beyond Transactions: Agent interactions will encompass social connections, continuous learning, and performing labor (earning value), mirroring the multifaceted nature of human life.

This complex economy, Raskar suggests, mirrors the evolution of the web itself – from internet-native services (search, email) to digitizing real-world experiences (Uber, Airbnb) and finally tackling entrenched industries (health, agriculture), with decentralized AI accelerating this progression.

Overcoming the Hurdles: Challenges to Decentralized AI Adoption

  • Incumbent Adoption: Perhaps controversially, Raskar suggests significant traction may require established tech players to embrace decentralization (as IBM once backed Linux), potentially as a hedge or strategic evolution, rather than viewing it purely as a battle.
  • Platform Integration: Gaining adoption requires integration with widely deployed platforms (OS providers like Microsoft/Apple, or communication apps like WhatsApp/Signal) to leverage existing distribution channels.
  • Community Focus: The decentralized AI community itself needs to rigorously prioritize the core pillars: true user data privacy/security, robust verifiability, and transparent, non-scammy incentive models.

Conclusion

Raskar's evolutionary perspective positions decentralized AI as the logical next step beyond centralized systems. Investors and researchers must evaluate projects holistically across his four pillars (privacy, incentives, verifiability, UX) to identify robust, future-proof opportunities in this emerging, potentially transformative agent-driven economy.
