The People's AI
July 2, 2025

Who Should Regulate AI? A Civil Debate on the Future of AI Policy

This episode brings together two sharp minds for a civil debate on AI policy: Justin Hendrix, CEO of Tech Policy Press, who emphasizes the need for accountability, and Jeff Amico, COO of Gensyn, who champions open-source innovation. They tackle the defining question of our time: how do we harness AI's benefits without succumbing to its risks?

The Regulatory Battleground: Federal vs. State

  • "I am in favor of the idea that it would be bad if we ended up with a patchwork of 50 different state-level frameworks... when you layer on regulation, you're implicitly biasing towards the big incumbent companies."
  • "This could potentially be one of the most impactful things that's happened on tech policy in the United States in possibly 30 years."
  • A proposed federal bill aims to create a moratorium on state-level AI legislation. Proponents argue this prevents a chaotic and unworkable "patchwork" of 50 different compliance regimes, which would stifle startups and entrench incumbents like Google and OpenAI, which can afford the legal overhead.
  • Opponents warn that this move could gut consumer protections. With Congress moving slowly on tech, state laws often serve as a crucial backstop for addressing harms like algorithmic discrimination. Removing this layer of accountability is seen as a massive, potentially irreversible, shift in tech policy.

The Open Source Dilemma: Competition vs. Control

  • "Without access to those open-source models, startups like us just couldn't compete... I actually view open source as fundamentally safer than closed source because you're shining sunlight on the entire system."
  • Open-source models are positioned as the lifeblood of competition. For startups, they are essential tools to experiment and build without having to create frontier models from scratch, and that access prevents a future where the entire ecosystem is controlled by a few closed-source giants.
  • The debate flips the conventional security narrative. Instead of being riskier, open source is argued to be fundamentally safer. By exposing models to the global community, bugs and vulnerabilities are found and fixed more effectively than within a single, secretive company. This "many eyes" approach is battle-tested, much like the Linux operating system.

The Unseen Costs: AI's Environmental Footprint

  • "We seem to be sacrificing whatever sliver of hope we had to reach our emissions and climate goals for an artificial intelligence boom that is speculative."
  • The AI boom carries a staggering environmental price tag. The electricity needed to train frontier models has doubled every year for the past decade, an unsustainable curve that is leading to the reopening of coal plants and threatening climate goals (see the quick sketch after this list).
  • This infrastructure buildout, one of the largest in decades, is happening at a breakneck pace with trillions of dollars forecasted for investment. One proposed market solution is to move away from massive, centralized data centers and toward decentralized compute networks that can better distribute the energy load.
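
To make that growth rate concrete, here is a minimal arithmetic sketch. It simply takes the episode's "doubling every year" claim at face value against a normalized baseline; the numbers are illustrative, not measured data.

```python
# Illustrative arithmetic only: the annual-doubling rate is the episode's
# claim, applied to a normalized baseline of 1 unit of electricity.
baseline = 1.0  # training-electricity demand at year 0 (normalized)
years = 10      # the decade cited in the episode

demand = baseline * 2 ** years
print(f"Demand after {years} years of annual doubling: {demand:.0f}x baseline")
# -> Demand after 10 years of annual doubling: 1024x baseline
```

Ten doublings multiply demand roughly a thousandfold, which is the sense in which the curve is called unsustainable.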

Key Takeaways:

  • The core tension in AI policy is a balancing act between unleashing innovation and establishing guardrails. While both sides agree on the goal of avoiding bad outcomes, the debate centers on whether to proactively regulate the technology itself or to rely on existing laws to punish harmful applications after the fact.
  • Over-regulation is a gift to incumbents. A complex web of state laws or premature federal rules could inadvertently hand the future of AI to a handful of giants by crushing the startups needed to challenge them.
  • Open source is the competitive frontier. It’s not just a development philosophy; it’s a strategic weapon for startups to survive and for the West to out-innovate geopolitical rivals without relying on ineffective protectionist policies.
  • AI's energy appetite is exponential and unsustainable. The environmental cost is a non-negotiable part of the equation, demanding solutions that move beyond simply building more massive, power-hungry data centers.

For further insights and detailed discussions, watch the full episode: Link

This episode dissects the fierce debate over AI regulation, exploring the critical tension between fostering permissionless innovation and establishing safeguards to prevent societal and environmental harm.

Speaker Introductions and Core Philosophies

  • Justin Hendrix, CEO and Editor-in-Chief of Tech Policy Press, approaches AI regulation through a societal lens. His focus is on how technology impacts democracy, equity, and environmental sustainability, and he expresses a healthy skepticism of both concentrated state and corporate power.
  • Jeff Amico, COO of Gensyn, a protocol for decentralized machine learning computation, represents a pro-market and pro-open-source viewpoint. Jeff is fundamentally optimistic about AI's benefits but voices a primary concern: that premature or excessive regulation will stifle innovation and lead to the technology being controlled by a few large, incumbent companies.

The "Who" of Regulation: A Federal vs. State Debate

  • Jeff Amico argues that it is premature to create sweeping new laws, suggesting that existing legal frameworks like consumer protection laws, tort liability, and anti-fraud doctrines are sufficient for now. He believes the court system is the ideal initial "testing ground" to identify real-world harms before codifying new rules.
  • Justin Hendrix notes that AI policy is already being formed at multiple levels globally: international, national, state, and even corporate self-governance. He focuses on the recent US federal bill passed by the House, which includes a controversial provision that could create a moratorium (a temporary prohibition) on new state-level AI laws.
  • Justin warns this could be profoundly impactful, stating, "this could potentially be one of the most impactful things that's happened on tech policy in the United States in possibly 30 years." He explains that civil society groups and even some states' rights advocates worry this would remove a critical backstop for holding companies accountable, especially since Congress has been slow to act on tech regulation.
  • Strategic Implication: Crypto AI investors must closely monitor the federal preemption debate. A federal moratorium could create a more uniform but potentially less restrictive regulatory environment in the US, favoring large-scale projects but possibly removing avenues for recourse at the state level.

The "What" of Regulation: Application vs. Foundation

  • Jeff Amico advocates for regulating at the "application layer"—the specific use of AI in a business or service. For example, an AI therapist chatbot should be subject to the same kind of regulations that human therapists are. This approach targets specific harms without restricting underlying research and development of the models themselves.
  • Justin Hendrix agrees that addressing harms like fraud and algorithmic discrimination is critical. However, he points out the political reality that federal law currently lacks basic protections, such as comprehensive privacy legislation, making it difficult to address AI-driven abuses effectively. He argues that state laws often provide necessary protections that a federal moratorium would eliminate.
  • Actionable Insight: The distinction between application-layer and foundational-model regulation is key. Investors should assess whether a project's business model falls into a traditionally regulated industry (e.g., finance, healthcare), as this is where regulatory scrutiny is likely to be applied first, regardless of broader AI laws.

The National Security and Geopolitical Dilemma

  • Justin Hendrix observes that concerns about falling behind China dominate conversations on Capitol Hill, creating pressure to "take the shackles off industry." He questions this narrative, suggesting that simply empowering large tech companies may not be the best strategy for ensuring US competitiveness or national security.
  • Jeff Amico views attempts to cut China off from technology, such as through chip export controls, as a fallacy. He points to the success of Chinese models like DeepSeek, which achieved frontier performance despite hardware restrictions.
  • Jeff argues for a different approach: "the way to actually solve that isn't to try to handicap them... but instead it's actually to allow open source to proliferate." He believes open competition is inevitable and that fostering a vibrant open-source ecosystem is the best way for the West to stay ahead.

The Open-Source vs. Closed-Source Spectrum

  • Justin Hendrix frames this not as a binary choice but as a spectrum. He acknowledges the trade-offs, noting that while open-source is often seen as pro-competitive, large companies can also use an "open" strategy to cement a market advantage.
  • Jeff Amico makes a strong case for open-source, emphasizing its necessity for startups like Gensyn to compete. He argues that without access to powerful open-weight models like Meta's Llama, the innovation pipeline would be cut off, leaving the field to incumbents.
  • He also contends that open-source is fundamentally safer. With closed models, you are trusting one company to find and fix all bugs and be truthful about it. With open-source, "you're basically shining sunlight on the entire system," allowing a global community to identify and harden against risks, much like how Linux became a secure, dominant operating system.
  • Strategic Implication: The viability of many decentralized and crypto-native AI projects depends on the continued availability of high-performance open-source models. Regulatory moves that restrict open-source development represent a direct existential threat to this segment of the market.

Biggest Concerns: Environmental Impact vs. Regulatory Capture

  • Justin's Concern: The Environmental Footprint. Justin expresses deep concern over the massive, often-hidden environmental cost of the AI boom, from data centers reviving coal plants to the immense consumption of energy, water, and rare-earth minerals. He worries that states may be stripped of their power to regulate these industrial impacts if a federal moratorium passes.
  • Jeff's Concern: Regulatory Capture. Jeff’s biggest fear is that layers of complex regulation will shrink the pool of competitors, creating a future where "all of the models that we use every day and depend on are run by OpenAI and OpenAI only." This outcome would stifle innovation and concentrate immense power in the hands of one or two companies.

Finding Hope and a Path Forward

  • Justin finds hope in democratic engagement, citing community activists in Memphis organizing around the impact of a new data center. He believes that as long as people can make their voices heard, they can shape better, more equitable outcomes.
  • Jeff is optimistic that a balanced regulatory middle ground can be found, pointing to recent bipartisan legislation for blockchain and stablecoins as evidence that Congress can create sensible safeguards for complex technologies without choking off innovation.

Conclusion

This debate highlights that the future of AI hinges on balancing innovation with accountability. For investors and researchers, the key is to monitor both legislative proposals that could reshape the market and the competitive dynamics between open-source ecosystems and closed, proprietary platforms.
