AI Engineer
December 13, 2025

Proactive Agents – Kath Korevec, Google Labs

The future of AI isn't about reactive tools waiting for your command; it's about proactive, anticipatory agents that eliminate developer "mental load" and unlock creative freedom. Kath Korevec from Google Labs' ADA team, working on the "Jules" project, lays out a compelling vision for AI as a trusted, collaborative teammate, not just a command-line utility.

The "Mental Load" of Reactive AI

  • “Even though I wasn't physically washing the dishes, I was still carrying this mental load... That's exactly where we are with asynchronous agents today.”
  • The Dishwasher Problem: Current AI agents, while automating tasks, often shift the "mental load" of monitoring, context-switching, and follow-up back to the developer. It's like having a helper who needs constant reminders.
  • Context Switching Tax: Humans are serial processors. Rapidly switching between tasks – common when managing reactive AI – can cost up to 40% of productive time. This isn't efficiency; it's a hidden tax on focus.
  • DevX Nightmare: The vision of developers "babysitting" 16 terminals running parallel AI tasks is a non-starter. The goal isn't more work, but less.

Anticipatory AI: The Four Ingredients

  • “Instead of a single reactive assistant for instructions, you could have dozens of small proactive agents working with you in parallel, quietly looking for patterns, noticing friction, and taking on the boring tasks that you don't want to do before you even ask.”
  • Beyond Reactive: The shift is from "ask and respond" to agents that "do the dishes without being asked," anticipating needs and acting pre-emptively.
  • The Four Pillars of Proactivity:
    • Observation: Continuously understanding your code, workflow, and project context. (Think of a smart thermostat learning your habits.)
    • Personalization: Learning your specific preferences, what you ignore, and what code you never want touched.
    • Timeliness: Intervening at the "magic moment" – not too early (interruptive) or too late (irrelevant).
    • Seamless Integration: Working within your existing tools (IDE, terminal) without forcing you into new applications.
  • Human-like Proactivity: Examples like Google Nest or your own body (anticipating a fall) show that proactivity isn't futuristic; it's a familiar, human trait we're now building into AI.

Collective Intelligence & Human-in-the-Loop Alignment

  • “Level three isn't really about autonomy anymore. It's actually about alignment to your project. Agents and humans collaborating together across the full life cycle of your project.”
  • Multi-Agent Systems: Google's Jules (code agent) is evolving through three levels: from an "attentive sous chef" (fixing minor issues) to a "kitchen manager" (contextually aware of your project) to a "collective intelligence" (converging with design agents like Stitch and data agents like Insights).
  • Consequence-Awareness: At Level 3, agents understand not just what is happening, but the consequences – how code changes affect users, performance, and business outcomes. This allows for holistic improvements across code, design, and data, driven by live signals.
  • Human-in-the-Loop (HITL): This isn't about full AI autonomy, but deep collaboration. Humans observe, refine, and redirect agents, ensuring alignment and trust across the entire project lifecycle. Features like agent memory, adversarial "critic" agents, and automated verification scripts deepen this partnership.
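One way to picture the human-in-the-loop pattern described above is a propose/review/redirect loop, where every agent proposal carries its predicted consequences and a human decides its fate. Everything in this sketch — the `Proposal` type, the verdict strings — is a hypothetical illustration, not the talk's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Proposal:
    """A change an agent wants to make, plus its predicted consequences."""
    summary: str
    consequences: str  # e.g. effect on users, performance, business outcomes

def run_with_human_in_loop(proposals: Iterable[Proposal],
                           review: Callable[[Proposal], str]):
    """Route every agent proposal through a human reviewer.

    `review` returns 'approve', 'redirect', or 'reject'. Redirected
    proposals go back to the agent for refinement instead of being applied;
    rejected ones are dropped entirely.
    """
    applied, refined = [], []
    for p in proposals:
        verdict = review(p)
        if verdict == "approve":
            applied.append(p)    # human confirmed alignment with the project
        elif verdict == "redirect":
            refined.append(p)    # agent must rework with new human guidance
    return applied, refined
```

The structure makes the "alignment, not autonomy" framing concrete: the agent's throughput is bounded not by how much it can generate, but by what the human is willing to approve or redirect.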

Key Takeaways

  • Strategic Implication: The market is moving beyond basic "copilot" functionality. The next frontier is proactive, context-aware AI that reduces cognitive load and integrates seamlessly into existing workflows.
  • Builder/Investor Note: Focus on building or investing in multi-agent architectures that converge context across the entire product lifecycle (code, design, data) and prioritize human-in-the-loop alignment over pure autonomy.
  • The "So What?": The fundamental patterns of software development (Git, IDEs, even code itself) are ripe for disruption. Don't be afraid to question old ways; the future of how software is built is being invented right now.

Podcast Link: https://www.youtube.com/watch?v=v3u8xc0zLec

This episode reveals the critical shift from reactive AI assistants to proactive agents, arguing that current developer workflows are unsustainable and demand a new paradigm of autonomous, context-aware collaboration.

The Burden of Reactive AI: Humans as Serial Processors

  • Humans are serial processors, not parallel; we juggle goals sequentially, not simultaneously.
  • Context switching between tasks incurs a "huge cost," potentially consuming up to 40% of productive time.
  • Current AI developer tools are fundamentally reactive, requiring explicit prompts or waiting for user input, which limits their utility and scalability.
  • Korevec asserts: "Developers can't be expected to babysit them [agents]."

The Vision for Proactive Agents: Trusted Collaborators

  • Proactive agents must understand context, anticipate developer needs, and know precisely when to intervene without explicit instruction.
  • This requires four core ingredients:
    • Observation: continuous understanding of code, patterns, and workflow.
    • Personalization: learning user habits, preferences, and "no-touch" code areas.
    • Timeliness: intervening at the optimal moment.
    • Seamless Workflow Integration: operating within existing tools like terminals, IDEs, and repositories.
  • Examples like Google Nest and the human body's autonomic responses demonstrate that proactivity is not futuristic but familiar and inherently human.
  • Korevec states: "We want Jules to do the dishes without being asked."

Jules: Google Labs' Proactive Coding Agent

  • Level 1: Attentive Sous Chef: Jules detects and automatically fixes issues like missing tests, unused dependencies, or unsafe patterns, keeping the codebase "clean" while the developer focuses on core tasks.
  • Level 2: Contextually Aware Kitchen Manager: The agent learns the developer's work style, project specifics (e.g., backend vs. frontend focus, frameworks, deployment styles), and anticipates next steps.
  • Level 3: Collective Intelligence & Consequence Awareness: Jules converges with other specialized agents like "Stitch" (design) and "Insights" (data) to understand not just context, but also the consequences of choices on user experience, performance, and business outcomes, proposing cross-boundary improvements based on live data.
  • Korevec emphasizes: "Level three isn't really about autonomy anymore. It's actually about alignment to your project."

Advanced Features & Proactivity in Action

  • Memory: Jules writes and edits its own memories, building a persistent knowledge base of the project and developer interactions.
  • Critic Agent: An adversarial agent performs full code reviews, ensuring high quality and challenging Jules' suggestions.
  • Verification: Jules generates Playwright scripts, captures screenshots, and integrates these into the workflow for user validation, ensuring proposed changes work as intended.
  • Proactive To-Do Bot: The agent scans repositories for "to-do" comments and proactively works on these tasks, anticipating future needs.
  • Korevec describes the demo: "Jules will index your entire codebase... and start looking for things that it can do... giving me some signal about what it's finding."
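The proactive to-do bot described above can be approximated with a simple repository scan. This is a stdlib-only sketch under assumptions of my own (the comment regex, the file suffixes, the tuple format); a real agent would use language-aware parsing and index far more than comments.

```python
import re
from pathlib import Path

# Matches "# TODO: ..." or "// TODO ..." comments (illustrative pattern only).
TODO_RE = re.compile(r"(?:#|//)\s*TODO:?\s*(.+)", re.IGNORECASE)

def scan_todos(repo_root: str, suffixes=(".py", ".js", ".ts")):
    """Walk the repo and collect (file, line_number, task) tuples that a
    proactive agent could pick up and work on without being asked."""
    tasks = []
    for path in sorted(Path(repo_root).rglob("*")):
        if path.suffix not in suffixes or not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            match = TODO_RE.search(line)
            if match:
                tasks.append((str(path), lineno, match.group(1).strip()))
    return tasks
```

In the pattern Korevec describes, the interesting part is what happens next: instead of merely listing these tuples, the agent drafts fixes for them and surfaces the results as signals to the developer.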

Investor & Researcher Alpha

  • Capital Shift: The focus on "proactive agents" signals a significant investment shift from reactive AI assistants (e.g., autocomplete, simple Q&A bots) to deeply integrated, autonomous workflow collaborators. Investors should seek platforms enabling this level of contextual awareness and multi-agent orchestration.
  • New Bottleneck: The "mental load" of managing AI agents is identified as a critical bottleneck. Solutions that reduce developer oversight and context switching will command premium value.
  • Research Direction: Research into multi-agent systems, persistent memory architectures for LLMs, and robust, explainable "critic" agents is paramount. The integration of design (Stitch) and data (Insights) agents with coding agents (Jules) points to a future where AI-driven development spans the entire product lifecycle, not just code generation.

Strategic Conclusion

The era of reactive AI is ending. The future of software development demands proactive, trusted AI collaborators that anticipate needs, manage complexity, and free human developers for creative work. The industry must rapidly question existing paradigms—Git, IDEs, even code itself—to embrace this imminent, agent-driven transformation.
