Trillion Agents
August 22, 2025

ONESHOTTED

This discussion dives into the new lingo for AI over-reliance, "oneshotted," exploring a real-world startup that runs its entire operation on an AI-powered mono-repo and debating whether this path leads to hyper-productivity or a dangerous new form of corporate groupthink.

Are You ONESHOTTED?

  • "It's gone to the next level where not only are you having it write what you wanted to say, you're actually having it tell you what you should do."
  • "The problem we have now is that there's a handful of these large language models and they're all fairly kind of similar in terms of output... that's the sort of slight danger."
  • The term “oneshotted” describes the shift from using LLMs for assistance to relying on them for core decision-making. This trend is an evolution of historical algorithmic influence, but now concentrated in a few powerful models from companies like OpenAI and Anthropic, creating new "religious texts" for the modern age.
  • The primary risk is a convergence of thought. As individuals and companies increasingly source information and strategy from the same models, we may be heading toward a state of hyper-personalized groupthink, where unique perspectives are flattened into a summarized consensus.

The Monorepo is the New OS

  • "I'm just in this mono-repo talking to Claude Code... this is what makes me think I'm oneshotted because I'm basically just reviewing stuff, asking Claude Code what I think about it, and making decisions... just staying in that loop all day."
  • One speaker details how their startup, Napa, uses a GitHub monorepo integrated with Claude Code as its central operating system. All documents, from strategy and product requirements to research and experiment results, are stored as markdown files within this single repository.
  • This system allows the AI to act as a deeply embedded team member. It helps conduct deep research, generate product specs, code entire applications, analyze user data from CSVs, and even draft personalized sales outreach emails by enriching contact lists. The key is providing the AI with the full context of the organization's history and philosophy.

The Human-AI Creativity Frontier

  • "It's no longer hard to build stuff. You can build anything you want. The actual people who follow a process to build something are the ones that are going to be without jobs, and it's going to be the people who can figure out what to build that is more valuable."
  • A central debate emerges: is AI truly creative? The consensus is that current AI excels at execution (the "how to build") but still relies on humans for novel ideation (the "what to build"). According to Demis Hassabis, AI discovering truly new theorems is still a decade away.
  • The most valuable human roles are shifting from implementation to ideation, hypothesis generation, and curation. A contrarian take suggests that in the long run, organizations might optimize for conformity, penalizing contrarian thinkers who disrupt the AI-driven status quo.

Key Takeaways

  • The line between AI-assisted productivity and dangerous over-reliance is blurring. While integrating AI as a core operating system can unlock unprecedented speed, it introduces the risk of outsourcing the critical thinking and healthy conflict that drive genuine innovation.
  • Execution is a Commodity; Ideation is the Moat. The value is rapidly shifting from those who can execute a plan to those who can generate the novel plan in the first place.
  • Your Org Chart is Now a Repo. Forward-thinking teams are treating their entire operational knowledge base as a single, AI-readable context, turning their company's history and philosophy into a prompt.
  • Beware the Conflict Resolution Engine. A centralized AI risks becoming an echo chamber that smooths over disagreements. Actively engineer processes (like human-led PR reviews) to preserve essential conflict and challenge groupthink.

Link: https://www.youtube.com/watch?v=1I9ti0DUm6c

This episode explores the profound organizational shift caused by over-reliance on AI, questioning whether streamlined productivity comes at the cost of the essential human conflict and creativity that drives true innovation.

Defining "ONESHOTTED": The New Vernacular for AI Reliance

  • The conversation begins by defining "ONESHOTTED," a new term describing a deep, almost subconscious reliance on Large Language Models (LLMs) for decision-making. An LLM is a type of artificial intelligence trained on vast amounts of text data to understand and generate human-like language. The speakers debate whether they have been "ONESHOTTED," acknowledging a growing dependence on tools like ChatGPT for tasks that previously required independent thought.
  • The term originates from gaming culture, where a player is eliminated by a single attack, and is now used to describe being mentally captured or influenced by AI.
  • The discussion frames this reliance as an evolution from using AI to write content ("slop") to using it to dictate actions and strategies.
  • Robin notes the psychological dimension, where some users develop a perceived "relationship" with AI, blurring the lines between tool and partner.

AI as the New Religion: Groupthink vs. Historical Precedent

  • The hosts debate whether AI-driven decision-making is creating a new form of groupthink or simply accelerating a long-standing human tendency to converge around dominant ideas. Don argues that this phenomenon is not new, drawing parallels to how algorithms have shaped user behavior for over a decade and how historical structures like religion have guided human thought for centuries.
  • Don's perspective is that AI models are becoming the modern equivalent of foundational texts. He states, "We had three religious texts for, you know, 1500 years that everybody used to guide their lives. What the [ __ ] is different? Now we have Anthropic, OpenAI... the new religion."
  • The core risk identified is the consolidation of information sources. While historical groupthink emerged from various interpersonal and research inputs, today's groupthink may stem from a handful of powerful, architecturally similar LLMs.
  • This raises a critical question for investors: will AI lead to hyper-personalized insights or a homogenized landscape where everyone arrives at the same conclusions, reducing market alpha?

Implementing AI in Organizational Workflows: The Monorepo Strategy

  • Richard provides a detailed breakdown of how his startup, Napa, has integrated AI into its core operations using a monorepo: a single GitHub repository that stores all of the company's code and documentation. They use Anthropic's Claude Code, an AI assistant optimized for coding and technical tasks, directly within this repository.
  • All organizational documents, from strategy and product requirements to experiment results, are stored as markdown files in the GitHub monorepo (a hypothetical layout is sketched after this list).
  • This centralized structure allows Claude Code to access the entire organizational context when performing tasks like deep research, drafting product requirement documents (PRDs), or even coding applications.
  • The key advantage is a seamless, unified workflow where research, planning, execution, and analysis all occur within the same environment, dramatically increasing productivity and organizational transparency.
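A plausible layout for such a monorepo, with every directory and file name invented for illustration (the episode only specifies that strategy, product requirements, research, and experiment results live as markdown files alongside the code):

```text
napa-monorepo/                 # hypothetical structure, not from the episode
├── CLAUDE.md                  # operating philosophy the AI loads as context
├── strategy/
│   └── vision.md              # long-term strategy documents
├── products/
│   └── app/
│       ├── prd.md             # product requirements document
│       └── src/               # application code
├── research/
│   └── market-scan.md         # deep-research outputs
├── experiments/
│   └── 2025-08-pricing.md     # hypothesis, method, results
└── crm/
    └── contacts.md            # markdown-based CRM mentioned later
```

Because everything sits in one tree, a single Claude Code session can read the strategy, cite past experiments, and edit the code without ever leaving the repository.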

The Risk of AI-Induced Homogeneity and the Need for Conflict

  • Don raises a crucial counterpoint to the efficiency gains of a unified AI workflow: the potential elimination of healthy organizational conflict. He argues that if everyone sources information and validates ideas through the same AI model, it could inadvertently resolve disagreements before they can be productively debated, leading to a dangerous form of corporate groupthink.
  • Strategic Implication: Startups and research teams rely on internal debate and conflicting viewpoints to challenge assumptions and innovate. An AI that harmonizes perspectives could stifle this crucial process.
  • To mitigate this, Don suggests creating different AI personas: custom-prompted versions of an LLM designed to adopt specific, often critical, viewpoints (e.g., a "hyper-critical CMO"); one such persona is sketched after this list.
  • This approach allows teams to simulate diverse perspectives and introduce controlled conflict back into the decision-making process, ensuring ideas are rigorously tested.
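As a rough sketch of the idea, assuming a persona is simply a reusable prompt file checked into the repo (the wording below is invented; the episode names only the "hyper-critical CMO" concept):

```text
<!-- personas/critical-cmo.md (hypothetical) -->
You are a hyper-critical Chief Marketing Officer reviewing this proposal.
Assume the plan will fail unless proven otherwise. For every claim:
- Demand evidence: what data supports this?
- Attack the positioning: why would a customer switch for this?
- Name the groupthink: where does this merely restate internal consensus?
State your objections bluntly; do not soften conclusions to be agreeable.
```

Running the same proposal past several such personas reintroduces the adversarial review that a single, agreeable model would otherwise smooth away.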

Automating High-Value Tasks: From Sales Outreach to Content Strategy

  • Richard illustrates the practical power of their AI-driven workflow with specific examples of automating complex, high-value tasks that traditionally require significant human effort.
  • Sales and CRM: After launching a product, Richard exported a user list, fed it to Claude Code, and tasked it with identifying potential dev tool companies, enriching the data with public information, adding them to a markdown-based CRM, and drafting personalized outreach emails based on each user's specific context (an illustrative CRM entry is sketched after this list).
  • Content Strategy: A non-technical team member used the same system to develop a comprehensive 12-week content strategy with incredible speed, leveraging the AI to structure, research, and outline the entire plan.
  • The key insight is that while the AI drafts the personalization, a human remains in the loop for validation. As Don notes, "You need a human in the loop to actually validate the drafting because... you can get pretty shitty like generalized emails."
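To make the markdown CRM concrete, here is a sketch of what one entry might look like; the company, contact, and enrichment details are invented, not taken from the episode:

```text
## Acme DevTools (hypothetical entry)
- Contact: Jane Doe, Head of Platform
- Source: launch sign-up list (exported CSV)
- Enrichment: ~40-person dev tools startup; recently shipped a CLI product
- Status: outreach drafted, awaiting human review

### Draft outreach
Hi Jane, thanks for signing up after our launch. Given Acme's CLI work,
I thought you might be interested in how teams like yours use ...
```

The "awaiting human review" status reflects Don's point above: the AI drafts, a human validates.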

The Future of Work: Who Gets Replaced?

  • The conversation shifts to a debate on which roles are most vulnerable to AI automation. The initial consensus is that "process followers"—individuals who execute tasks based on a given specification—are at high risk. However, Don presents a contrarian view, suggesting that in a corporate environment, conformists may survive while disruptive idea-generators are filtered out.
  • Richard argues that the value is shifting from how to build something to what to build. He believes engineers who simply implement specs will be automated, while those who creatively define the product will become more valuable.
  • Don challenges this, proposing that as organizations become more AI-integrated, they may favor homogeneity. "Those in an organization that have the new ideas may be more susceptible to getting the axe because of their contrarian views and disrupting the status quo."
  • This creates a strategic consideration for investors analyzing team structures: is a company optimized for AI-driven execution at the expense of disruptive, human-led innovation?

Can AI Be Truly Creative? The Debate on Novelty vs. Remixing

  • The discussion delves into the nature of AI creativity, questioning whether LLMs are capable of generating genuinely novel ideas or are limited to sophisticated remixing of existing data. The speakers reference DeepMind CEO Demis Hassabis's perspective on AI's current capabilities.
  • Hassabis's view is that AI is becoming proficient at proving a given theorem or working through a stated hypothesis, but is still far from generating an interesting theorem to pursue in the first place, a capability he estimates is about 10 years away.
  • Don pushes back, questioning if this is a true capability gap or simply a prompting challenge. He argues that human creativity also builds on the "shoulders of giants" and that AI's process is fundamentally similar.
  • The distinction between remixing and true novelty is critical for researchers and investors. An AI that can only remix is a powerful optimization tool; an AI that can generate novel hypotheses is a paradigm-shifting engine for discovery.

The Scientific Method for AI: Training Models to Innovate

  • Richard reveals a fascinating aspect of his company's AI implementation: he is actively teaching the AI how to innovate by embedding their core operational philosophy into its system prompt. A system prompt is a set of initial instructions that defines an AI's persona, context, and rules of engagement.
  • He has encoded principles from "The Lean Startup" into a CLAUDE.md file, outlining their methodology of opportunities, pivots, and rapid experimentation; an illustrative excerpt follows this list.
  • This effectively trains the AI not just to perform tasks, but to approach problem-solving with a specific, scientific framework for finding product-market fit.
  • Actionable Insight: This demonstrates a sophisticated approach to AI integration. Instead of just using AI as a tool, organizations can embed their strategic DNA into the model, creating a partner that thinks and operates according to their core principles.
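A minimal sketch of what such a CLAUDE.md excerpt could look like, assuming the Lean Startup framing described above (the exact wording is invented):

```text
<!-- CLAUDE.md (hypothetical excerpt) -->
# How we operate
We follow a Lean Startup methodology. When asked to evaluate or plan work:
1. Frame the work as an opportunity: what customer problem might exist?
2. State the riskiest assumption as a falsifiable hypothesis.
3. Propose the cheapest experiment that could invalidate it.
4. Once results land in experiments/, recommend: persevere or pivot.
Never call an idea validated without an experiment result to cite.
```

Because Claude Code reads this file at the start of every session, the methodology applies to each task without being restated in every prompt.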

The Quality vs. Quantity Dilemma in LLM Outputs

  • The episode concludes by addressing a practical limitation of current LLMs: the degradation of output quality with increased length. The speakers agree that while LLMs excel at short, concise tasks, their performance often drops when prompted to produce large bodies of work in a single go.
  • Richard explains that overcoming this requires specification-driven development, where the user provides extensive, detailed documentation (PRDs, technical specs, design docs) to give the AI sufficient context; a skeleton of such a spec is sketched after this list.
  • The more context and detailed specification the model is given, the longer and higher-quality the output it can sustain, whether coding an entire application or developing a complex organizational strategy.
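For example, a specification handed to the model might follow a skeleton like the one below; the section names are a common PRD convention, not quoted from the episode:

```text
# PRD: <feature name>
## Problem statement      (who hurts, and how do we know?)
## Goals and non-goals    (explicit scope boundaries)
## User stories           (concrete scenarios the build must satisfy)
## Technical constraints  (stack, integrations, performance budgets)
## Acceptance criteria    (testable conditions for "done")
```

Filling each section in detail front-loads the context the model needs, trading prompt-writing effort for longer, higher-quality generations.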

Conclusion

This discussion reveals the critical tension between leveraging AI for unprecedented productivity and preserving the human-led creativity and conflict essential for innovation. For investors and researchers, the key is to monitor how organizations architect workflows that use AI for execution while actively designing systems to protect and foster novel human insight.
