a16z
September 8, 2025

The Little Tech Agenda for AI

This podcast breaks down the AI policy landscape from the perspective of startups, featuring a16z’s policy leads Colin and Matt, who detail the fight for a pro-innovation regulatory framework that enables competition.

Advocating for the Little Guy

  • “There wasn't anyone who was actually advocating on behalf of the startups and entrepreneurs, the smaller builders in the space.”
  • “If you're five people in a garage, how are you supposed to be able to comply with the same things that are built for thousand-person compliance teams? It's just not the same thing.”

The "Little Tech" agenda was born from the realization that startups were an unrepresented voice in policy debates dominated by Big Tech. While incumbents can absorb the cost of complex regulations, a five-person team cannot, creating a massive barrier to competition. The agenda’s goal is not zero regulation, but smart regulation that fosters a vibrant, competitive ecosystem where new players can challenge the giants.

Regulate Use, Not Development

  • “‘Regulate use, do not regulate development’ somehow is interpreted as ‘do not regulate.’”

The core of a16z's AI policy framework is simple yet widely misunderstood: punish harmful applications of AI, don't stifle its creation. This means using existing consumer protection, civil rights, and criminal laws to prosecute bad actors who leverage AI for illegal ends. This approach is often misconstrued as a call for a lawless free-for-all, when it actually advocates for robust enforcement of established laws rather than creating complex, preemptive compliance regimes that primarily benefit incumbents.

The AI Policy Battlefield

  • “There were ideas being proposed by not just the government but industry to require a license to build frontier AI tools and for it to be regulated like nuclear energy.”

The current AI policy debate was ignited by panicked hearings where CEOs warned of "Terminator" scenarios, fueling a push for restrictive governance. Early proposals included licensing AI development like nuclear power—a move that would have cemented a market of only two or three major companies. The narrative has since shifted from "safety first" to a more balanced "we need to win while keeping people safe," recognizing the national security imperative of competing with China. The new front line is federal preemption: establishing a national standard to prevent an unworkable 50-state patchwork of regulations.

Key Takeaways:

  • The conversation around AI governance is a high-stakes battle for the future of innovation. The central conflict isn't whether we should regulate, but how—and whether the rules will create a competitive market or an entrenched oligopoly.
  • Stop Regulating Ghosts. Policy should target concrete, illegal uses of AI under existing laws, not hypothetical future harms that require licensing regimes and kill startups before they can compete.
  • Compliance is a Competitive Moat. Regulations designed for trillion-dollar companies are a death sentence for startups. A 50-state patchwork of rules would be the final nail in the coffin for a competitive AI ecosystem.
  • Innovation Needs a Political War Chest. The pro-innovation camp has been outmaneuvered by well-organized "safetyism" advocates. Building political gravity through organized efforts like PACs is now essential to ensure America wins the AI race.

This episode reveals the high-stakes policy battle shaping the future of AI, detailing how a proactive "Little Tech" agenda is fighting to create a competitive market for startups against regulatory frameworks that favor incumbents.

The “Little Tech Agenda”: A New Voice in AI Policy

Colin and Matt introduce "The Little Tech Agenda," a policy and advocacy framework created to represent the interests of startups and smaller builders in technology. Colin explains that while large, established tech companies have long had a presence in Washington, D.C., their interests do not always align with those of emerging companies. The agenda was born from the realization that a five-person startup in a garage cannot comply with the same regulations designed for trillion-dollar corporations with thousand-person compliance teams.

Matt, who joined the firm after the agenda's release, notes it highlighted an "empty seat" in policy conversations. He observed that many proposed regulations, like extensive disclosure requirements, were created without considering their impact on resource-strapped startups. The core question driving their work is how to create regulatory frameworks that support, rather than stifle, competition for companies trying to challenge giants like Microsoft, OpenAI, and Google.

  • Strategic Implication: Investors should recognize that the policy landscape is not monolithic. Understanding the distinction between "Big Tech" and "Little Tech" interests is crucial for assessing the regulatory risk and competitive viability of early-stage AI ventures.

Smart Regulation, Not Zero Regulation

The conversation clarifies a common misconception: the agenda does not advocate for a complete absence of regulation. Matt emphasizes that the firm operates on 10-year fund cycles, meaning their goal is to foster a vibrant, healthy, and safe long-term ecosystem, not to chase short-term market spikes. Problematic AI products or public distrust would ultimately harm their financial interests.

Colin reinforces this by stating their interests are aligned with those of the United States—funding the cutting-edge companies that will drive jobs, national security, and the economy. However, they consistently encounter the perception that they want no rules at all.

  • Quote: Matt states, "I actually can't think of a single example across the portfolio in which we are arguing for zero regulation."
  • Actionable Insight: Researchers and investors should look beyond the "regulation vs. no regulation" binary. The critical analysis lies in what kind of regulation is being proposed and whether it promotes a competitive market or entrenches incumbents.

The Core Framework: Regulate Use, Not Development

The central pillar of their AI policy framework is to regulate the harmful use of AI, not its development. This distinction is frequently misinterpreted as a call for deregulation. Matt explains that this approach focuses on applying existing laws—such as consumer protection, civil rights, and criminal statutes—to actions performed with AI. This provides a robust legal foundation to address concrete harms without stifling innovation at the development stage.

  • Strategic Consideration: This framework suggests that AI companies building foundational models face less regulatory risk at the development stage, but the onus shifts to ensuring their application layers comply with existing laws. Investors should prioritize startups with a clear understanding of the legal implications of their specific use cases.

The History of the AI Policy Debate: Fear and Incumbent Influence

Colin traces the current policy climate back to Senate hearings in the fall of 2023, where testimony from major AI CEOs, filled with speculation about existential risks, "spooked Capitol Hill." Those fears, amplified by a decade of well-funded advocacy from the effective altruist community (a philosophical movement focused on using evidence and reason to benefit others as effectively as possible, and a heavy influence on the AI safety conversation), hardened into a powerful "safetyism" narrative that pushed policymakers toward locking down the technology quickly.

This fear-driven environment led to the Biden executive order and subsequent state and federal proposals that the firm views as poorly conceived. Matt adds that after the perceived regulatory failures with social media, companies rushed to the White House to negotiate "voluntary commitments," a process that excluded all other current and future AI developers.

  • Actionable Insight: The history of the debate shows that policy can be driven by narratives as much as by technical reality. Investors must track the dominant narratives in Washington, as they directly influence the creation of regulations that can either create or destroy market opportunities.

Alarming Proposals and the Push for Centralization

The discussion highlights how close the industry came to facing extreme, innovation-killing regulations. Colin recounts a prevailing view within the previous administration that only two or three major companies would be able to compete in AI, a premise used to justify restrictive, centralized oversight. This included proposals for a mandatory government license to build frontier AI models, which would have regulated the technology like nuclear energy.

Matt notes that such a regime would be unprecedented for software and is antithetical to a competitive market. These ideas, along with potential bans on open-source models, were seriously considered and demonstrate the initial trajectory of the policy debate.

  • Quote: Colin warns, "The nuclear policy in the United States has yielded two, three new nuclear power plants in a 50-year period... if we do the same thing to AI... we lose to China full stop."
  • Strategic Implication: The threat of licensing regimes or open-source bans, while currently diminished, remains a tail risk. Investors should favor jurisdictions and policy frameworks that explicitly support open innovation and avoid creating high barriers to entry for new model developers.

The Rationale Behind Problematic Policy

Matt argues that many of these restrictive policy ideas were formed in "good faith" by policymakers who viewed AI as a "do-over opportunity" after feeling they had been "asleep at the wheel" during the rise of social media. This bipartisan sentiment, however, led to policy concepts like licensing that would have ironically entrenched the very market concentration they had previously criticized in social media.

The speakers also theorize that the AI policy debate has become a proxy for other unresolved issues. Just as crypto regulation is used as a venue to relitigate securities laws, AI policy is being used to address grievances with content moderation, algorithmic bias, and privacy that predate modern AI. This muddies the water and leads to complex, ineffective proposals like the one recently passed in Colorado, which creates a confusing "high-risk" vs. "low-risk" system for startups to navigate.

  • Actionable Insight: Researchers should be aware that AI legislation is often a "Trojan horse" for broader tech policy debates. Analyzing proposed bills requires looking beyond their stated AI-specific goals to understand their potential impact on data privacy, content moderation, and competition.

The Current State: A Shift Towards Innovation

The conversation shifts to the present, highlighting a more positive turn in federal policy. The national AI action plan signals a significant rhetorical shift: where the earlier focus was on safety with a "splash of innovation," the administration's stance is now about ensuring the U.S. wins the AI race while keeping people safe.

  • Quote: Colin emphasizes the change: "Now it is we understand how important this is from a national security perspective... We need to make sure that we win while keeping people safe."

This new framing supports open source, rightsizing regulation for startups, and federal leadership over a chaotic patchwork of state laws. It also includes compelling, under-discussed initiatives like worker retraining and labor market monitoring to proactively address potential economic disruption from AI.

The Geopolitical Angle: Winning Against China

The discussion underscores that winning the AI race against China is a primary driver of the firm's policy thinking. This involves not only fostering domestic innovation but also carefully considering export controls. While supportive of preventing powerful U.S.-made technology from falling into the hands of the Chinese military, Colin warns against overly restrictive policies that could inadvertently ban the export of American open-source models.

He argues that the U.S. faces a fundamental choice: either have the world use American products, which extends U.S. soft power, or cede the global market to Chinese technology by locking down its own.

  • Strategic Implication: The U.S.-China tech competition is a powerful tailwind for policies that favor innovation and open-source development. Investors should monitor export control regulations closely, as they can directly impact the global reach and adoption of portfolio companies' technologies.

The Moratorium Failure and the Path Forward

Colin provides a post-mortem on the failed attempt to pass a federal moratorium on state AI laws. He attributes its failure to a misinterpretation of its scope, strong opposition from the "doomer crowd," and, most importantly, a lack of organization from the pro-innovation side of the industry.

The key lesson was that the tech industry and its allies were not organized enough to counter the well-established advocacy networks of their opponents. The path forward involves building a stronger coalition, clarifying policy goals through writing and media, and establishing a political center of gravity through new initiatives like the "Leading the Future" PAC.

  • Actionable Insight: Political advocacy is no longer optional for the AI sector. The failure of the moratorium highlights the need for a unified and well-funded political strategy. Investors and founders should consider supporting and participating in industry-wide advocacy efforts to shape a favorable regulatory environment.

Federal vs. State: Defining the Constitutional Lanes

Matt outlines the ideal regulatory structure based on constitutional principles. The federal government should lead on regulating AI development and interstate commerce, creating a unified national market. States, in turn, have a critical role in policing harmful conduct within their borders using their existing criminal and civil laws.

He introduces the Dormant Commerce Clause, a constitutional doctrine that prevents states from passing laws that excessively burden out-of-state commerce. This legal principle could serve as a guardrail against states creating a "50-state patchwork" of conflicting regulations that would be impossible for startups to navigate.

  • Strategic Consideration: The debate over federal preemption—where federal law supersedes state law—is the single most important structural issue for AI companies. A strong federal framework would provide regulatory certainty, while a patchwork of state laws would create significant compliance costs and legal risks.

Conclusion

The AI policy landscape is shifting from a defensive posture against restrictive regulation to a proactive fight for a framework that enables startup competition. Investors and researchers must track the federal preemption debate and support organized advocacy, as a unified national standard is critical for innovation and American leadership.
