AI Engineer
December 12, 2025

Moving away from Agile: What's Next – Martin Harrysson & Natasha Maniar, McKinsey & Company

AI isn't just a new tool; it's a tectonic shift demanding a complete overhaul of how software gets built. Martin Harrysson and Natasha Maniar from McKinsey & Company lay out the stark reality: most enterprises are seeing only marginal gains from AI because they're trying to fit a square peg (AI-native development) into a round hole (legacy Agile operating models). The real opportunity lies in fundamentally redesigning workflows, roles, and incentives for an AI-first world.

The AI Productivity Paradox

  • “We're in a place where there's a bit of a disconnect between this big potential around AI... from the reality. As we've started implementing AI... what has started to emerge is a set of bottlenecks that were not necessarily there before.”
  • Marginal Gains: Despite individual developers seeing massive productivity boosts (tasks shrinking from days to minutes), enterprise-wide improvements average a paltry 5-15%. The tools work, but the system doesn't.
  • New Bottlenecks: AI accelerates code generation, but exposes new chokepoints. Manual code review becomes a bigger bottleneck, and unchecked AI-generated code can amplify technical debt – like a super-efficient but messy chef who cooks fast but leaves a giant cleanup.
  • Uneven Impact: AI's effectiveness varies by task (e.g., legacy modernization vs. greenfield features) and individual skill, making work allocation a complex, inefficient puzzle for engineering managers.

Redesigning for AI-Native Workflows

  • “AI-native roles essentially means that we're moving away from the two-pizza structure to one-pizza pods of three to five individuals... product builders are managing and orchestrating agents with full-stack fluency.”
  • Beyond Agile: The traditional Agile model (8-10 person teams, two-week sprints, story-driven development) was built for human constraints. AI demands a new operating model.
  • Context-Specific Models: There's no one-size-fits-all. Legacy modernization might use an "agent factory" (humans spec, agents build, humans review), while new features benefit from iterative loops where agents co-create.
  • AI-Native Roles: Teams shrink to "one-pizza pods" (3-5 people). Roles consolidate; product builders become orchestrators of agents with full-stack fluency. Product Managers iterate on specs directly with agents, not long PRDs.
  • Top Performers' Edge: Leading companies are 7x more likely to have AI-native workflows (scaling AI across 4+ SDLC use cases) and 6x more likely to have AI-native roles, leading to 5-6x faster time-to-market.

Scaling AI Transformation: The Human Challenge

  • “The crux to actually scaling this is often about getting 20, 30, or even more things right at the same time that involve the way you communicate, what this means, the way you incentivize people, the way you upskill them, and it all has to come together.”
  • Change Management is Key: Scaling AI isn't just about tools; it's about getting 20-30 small things right simultaneously – communication, incentives, upskilling, and measurement. It's like orchestrating a symphony; every instrument needs to be in tune for the whole to sound good.
  • Upskilling & Incentives: Hands-on training, "bring your own code" labs, and dedicated coaches are crucial. If roles aren't redefined and incentives aren't aligned, 70% of companies won't see meaningful change.
  • Robust Measurement: Top performers measure outcomes (time to market, quality, customer satisfaction, economic impact) not just adoption or velocity. Bottom performers often don't measure speed or productivity at all, trying to navigate without a compass.

Key Takeaways:

  • Strategic Implication: The "Agile" era is ending. AI demands a new, more fluid, and context-aware operating model for software development.
  • Builder/Investor Note: Look for (or build) companies that are fundamentally redesigning their SDLC, team structures, and roles around AI, not just bolting on tools. This includes robust, outcome-based measurement.
  • The "So What?": The next 6-12 months will separate the AI-native leaders from the laggards. Those who embrace this human and organizational transformation will unlock exponential value; others will be stuck with marginal gains.

Podcast Link: Link

This episode exposes a critical disconnect: while AI offers unprecedented individual productivity gains, enterprises remain stuck with marginal 5-15% improvements, bottlenecked by outdated Agile operating models.

The AI Productivity Paradox

  • Martin Harrysson opens by asserting that AI represents a paradigm shift in software development, akin to the advent of Agile two decades ago. Despite individual developers leveraging AI agents for tasks that once took days, enterprise-wide productivity gains remain surprisingly low.
  • Martin recalls his early career during Agile's adoption, highlighting its transformative impact on software development.
  • Today, AI tools enable individual developers to complete tasks in minutes that previously required hours or days.
  • However, a McKinsey survey of 300 enterprises reveals average company-wide productivity improvements of only 5-15%.
  • This gap stems from new bottlenecks: collaboration models fail to keep pace with accelerated development, manual code review processes are overwhelmed by increased code generation, and AI-generated code often amplifies technical debt.

"There's a bit of a disconnect between this big potential around AI... from the reality." – Martin Harrysson

Agile's Obsolete Constraints

  • The current Agile operating model, designed for human-centric development, now acts as a rate-limiter for AI-driven teams. Traditional structures and processes hinder the full realization of AI's potential.
  • AI's impact is highly uneven: some tasks see massive improvements, others minimal, creating allocation challenges for team leaders.
  • Agents often receive "fuzzy" requirements, leading to code that doesn't meet intent, forcing more manual review.
  • Most large companies remain "stuck in a world of relatively marginal gains," operating with 8-10 person teams and two-week sprints—elements of an outdated Agile model.
  • McKinsey's work with clients demonstrates that breaking these traditional models through smaller teams, new roles, and shorter cycles unlocks significant performance improvements.

"Most large companies today are stuck a little bit in a world of relatively marginal gains... working in ways that was developed with constraints that we had in the past paradigm of human development." – Martin Harrysson

Forging AI-Native Operating Models

  • Natasha Maniar reveals that top-performing enterprises are fundamentally rewiring their Product Development Life Cycle (PDLC) to be AI-native, moving beyond point solutions to integrated workflows and redefined roles.
  • Top performers are seven times more likely to implement AI-native workflows, scaling AI across at least four Software Development Life Cycle (SDLC) use cases.
  • They are six times more likely to adopt AI-native roles, featuring smaller, specialized "pods" with consolidated skill sets.
  • Different engineering functions require tailored AI operating models: "factories of agents" (humans provide initial specs, final review) for modernizing legacy codebases, and "iterative loops" (agents as co-creators) for new features; both patterns are sketched after the quote below.
  • These shifts require continuous upskilling, impact measurement, and new incentive structures for developers and Product Managers (PMs).

"Rewiring the PDLC is not just a one-size-fits-all solution... different types of engineering functions... may require different operating models based on how humans and agents best collaborate." – Natasha Maniar

Redefining Roles and Team Structures

  • The integration of AI agents necessitates a radical transformation of traditional developer and product manager roles, shifting focus from execution to orchestration and direct prototyping.
  • Engineers transition from simply writing code to becoming orchestrators, strategically dividing work for AI agents.
  • Product Managers evolve to create direct prototypes in code, iterating on "specs" (specifications) with agents rather than relying on lengthy Product Requirement Documents (PRDs).
  • The traditional "two-pizza team" structure (8-10 people) gives way to "one-pizza pods" (3-5 individuals) with consolidated roles, fostering full-stack fluency and a deeper understanding of the codebase architecture.
  • Despite the clear benefits, approximately 70% of surveyed companies have not yet changed their roles, creating a significant barrier to AI adoption and impact.

"Engineers are moving away from execution and just simply writing code to being more of orchestrators and thinking through more how to divide up work to agents." – Martin Harrysson

Scaling AI Across the Enterprise: The Change Management Imperative

  • Scaling AI beyond individual teams to hundreds or thousands of employees demands a comprehensive, multi-faceted change management strategy, addressing communication, incentives, and upskilling simultaneously.
  • Initial rollouts of AI tools often see usage drop-off or suboptimal adoption without proper organizational support.
  • Effective scaling requires "getting 20-30 or even more things right at the same time," encompassing clear communication, tailored incentives, and hands-on upskilling.
  • McKinsey's client interventions include assigning sprint stories with agents, co-creating prototypes with agents for security/observability, and reorganizing squads by workflow (e.g., bug fixes vs. greenfield development).
  • These interventions led to a 60x increase in agent consumption, a 51% increase in code merges, and improved delivery speed tied directly to business priorities.

"Change management... is about getting a lot of like small things right. And so the crux to like actually scaling this is often about getting 20, 30 or even more things right at the same time." – Martin Harrysson

The Outcome-Driven Measurement Framework

  • To truly unlock AI's value, organizations must move beyond simple adoption metrics to a holistic, outcome-focused measurement system that tracks inputs, outputs, outcomes, and economic impact.
  • A surprising finding: bottom-performing enterprises often fail to measure speed, and only 10% track productivity.
  • McKinsey advocates a "MECI framework" (Inputs, Outputs, Outcomes, Economic Outcomes) to monitor progress and pinpoint issues; a data-structure sketch of the framework follows the quote below.
  • Inputs include investment in AI tools and resources for upskilling/change management.
  • Outputs track adoption breadth/depth, velocity, and capacity, alongside developer Net Promoter Score (NPS) and code quality/resilience (e.g., Mean Time To Resolve priority bugs).
  • Economic Outcomes focus on C-suite priorities: time to revenue, increased price differential for features, customer expansion, and cost reduction per pod.

"Building a robust measurement system that prioritizes outcomes and not just adoption is important not just to monitor progress but also pinpoint issues and course correct quickly." – Natasha Maniar

Investor & Researcher Alpha

  • Capital Reallocation: Expect significant capital shifts towards AI-native SDLC platforms, comprehensive developer upskilling programs, and organizational change management consultancies specializing in AI integration. Investment in traditional Agile tooling without AI adaptation will diminish.
  • New Bottlenecks: The primary bottleneck for enterprise AI adoption is no longer compute or model capability, but organizational inertia, outdated operating models, and the absence of outcome-driven measurement systems. Solutions addressing these "human-in-the-loop" and process challenges will capture outsized value.
  • Research Direction: Research into AI-driven team topologies, dynamic work allocation algorithms, and AI-native quality assurance/security frameworks will yield high returns. Purely individual productivity tools, without consideration for team collaboration and organizational scaling, risk becoming commoditized.

Strategic Conclusion

The era of traditional Agile is ending. Enterprises must embrace a new, AI-native software development model characterized by smaller, more numerous teams, redefined roles, and continuous, outcome-driven processes. The next step for the industry is a fundamental organizational rewiring, starting now with bold ambition.
