a16z
August 26, 2025

How Scale AI is Pioneering the Future of Work

Ben Shariffstein, Head of Product for Enterprise Applications at Scale, joins the pod to unpack the "forward deployed" model that has quietly grown into a nine-figure business. He breaks down how custom AI agents are transforming Fortune 500s and why trading margin for moat is the winning strategy in this new platform shift.

The New Enterprise AI Playbook

  • "We're not trying to replace software; we're trying to augment or automate human work. What that means is it doesn't exist today in software. There aren't competitors."
  • "For those companies [top of every industry], they actually don't want the average of their peers...They don't want what's been productized for everyone else because they have an advantage, a secret sauce that has made them special over time."
  • Enterprise AI adoption lags the cutting edge by about 18 months, not due to model capabilities, but because of change management, data integrations, and security hurdles. The biggest demand is for custom solutions that capture a company’s unique "secret sauce."
  • Scale’s model combines its GenAI platform with forward deployed teams (engineers, PMs, ML experts) who embed with Fortune 500 clients. They build bespoke AI agents to solve core business problems, like those mentioned on earnings calls, and then feed those learnings back into the platform.

The 'Forward Deployed' Unicorn Role

  • "I really think of this [forward deployed PM] as a very unique role which is part PM, it's part founder...You have to have executive presence because you have to be able to pitch the AI vision for the company...you got to be really steeped in data and really understand AI."
  • The modern "forward deployed" team is a multi-disciplinary unit. It includes engineers for software and data, ML engineers for building and evaluating agents, and product managers who act as the "chief product officer for AI" within the client’s organization.
  • The ideal candidate is a "founder type": customer-obsessed, data-fluent, and willing to embrace "schlep blindness"—doing the unglamorous work like data migrations to solve a problem end-to-end. Their mandate isn't just to build, but to identify repeatable patterns that can be productized.

Trading Margin for Moat

  • "I would sacrifice a lot to go in there and own that [mission-critical context/data layer]. And then over time you're going to be able to...build all those primitives and those are going to be I think the most defensible businesses."
  • The biggest mistake an AI startup can make right now is prematurely optimizing for 80%+ SaaS gross margins. The real prize is capturing the critical data and workflows that will become the systems of intelligence of the future.
  • This services-led motion is a wedge. By solving messy, bespoke problems, companies can become an indispensable system of record, creating deep moats through high switching costs and proprietary data. The key is to charge for this implementation work to fund the moat-building and discover the true value you’re creating.

Key Takeaways:

  • Automate Human Work, Don't Replace Software. The biggest opportunities are in augmenting human workflows that have never been codified in software. This requires a hands-on, problem-solving approach, not an off-the-shelf product.
  • 'Forward Deployed' Teams are the New Kingmakers. This hybrid role—part builder, part consultant, part visionary—is the essential bridge for getting complex AI into production within large enterprises, closing the gap between platform potential and real-world customer needs.
  • Sacrifice Near-Term Margin for Long-Term Moat. In this platform shift, obsessive margin-chasing is a fatal error. The winning move is to do the messy, hands-on implementation work to embed your solution, own the critical data layer, and build a truly defensible business.

For further insights and detailed discussions, watch the full podcast: Link

This episode reveals how Scale AI is pioneering a services-led, software-enabled "forward deployed" model to win the enterprise AI market, arguing that trading short-term margin for deep customer integration is the key to building a durable moat.

Inside Scale AI's Enterprise Applications Business

  • Ben Shariffstein, Head of Product for Enterprise Applications at Scale AI, explains that beyond its well-known data labeling business, Scale has a rapidly growing applications division. This unit works directly with governments and Fortune 500 companies to build and deploy custom AI agents that solve core business problems.
    • Instead of a one-size-fits-all product, Scale uses its GenAI Platform—an internal LLM Ops tool—to support a "forward deployed" team of engineers and product managers. LLM Ops (Large Language Model Operations) refers to the tools and practices for managing the lifecycle of large language models in production, including deployment, monitoring, and maintenance.
    • This team embeds with customers to build full-stack, custom applications tailored to their specific needs. This hands-on approach is necessary because the AI landscape is changing so rapidly that a fixed product set would quickly become obsolete.
    • Ben notes the business has found significant product-market fit in the last 12-18 months, growing into a nine-figure revenue stream as enterprises move past experimentation and into full-scale AI adoption.

The Shift in Enterprise AI Adoption

  • The conversation highlights a critical lag between the public hype around AI and its practical implementation in large organizations. Ben observes that enterprises are finally moving from small-scale pilots to deploying AI in production and seeking to expand its capabilities across their businesses.
    • Ben identifies an approximately 18-month delay between state-of-the-art AI developments seen on platforms like Twitter and their effective use within enterprises.
    • He argues that the primary obstacles were not model capabilities but challenges related to change management, data integrations, security, and defining new user experience paradigms.
    • According to Ben, the conversation with customers has evolved dramatically. "When I joined Scale about a year ago, we were seeing pilots... now what we're seeing is, hey, you actually have a bunch of things in production, but we want to really dramatically expand those capabilities."

Horizontal Platform vs. Vertical AI: A Strategic Trade-Off

  • The discussion contrasts Scale’s horizontal, custom-build approach with vertical AI companies like Harvey (for legal) or Decagon (for customer support). Ben argues that the world's largest companies require bespoke solutions to maintain their competitive edge.
    • Top-tier enterprises in sectors like banking, insurance, and telecom do not want standardized, off-the-shelf AI solutions. They aim to protect their unique "secret sauce"—proprietary data, processes, or customer distribution advantages.
    • Scale’s forward deployed model allows it to capture a company's specific nuances and embed them into the AI agents it builds, creating a solution that reinforces the client's unique market position.
    • While vertical AI companies can productize faster, Scale focuses on going "one to two layers down on productization," offering a semi-custom framework that can be tailored to complex, multi-faceted problems that don't fit neatly into a single vertical.

The "Forward Deployed" Model: An AI-Native Approach

  • Ben frames Scale’s strategy as building the "AI-native version" of Palantir's successful forward deployed model. This approach embraces the idea that the cost of software development is plummeting, making custom builds more economically viable than ever.
    • The core mission is to augment and automate knowledge work. Scale applies this principle to its own operations, building internal agents to handle tasks like data engineering.
    • A key belief driving this model is that "the cost of writing software is going to zero." This allows the team to solve immediate, high-value problems for customers without over-optimizing for a future that is fundamentally unpredictable.
    • This strategy prioritizes agility and direct problem-solving, using coding agents and other tools to rapidly build and iterate on solutions.

Navigating the New AI Platform Shift

  • Ben offers tactical advice for other companies building in the AI space, drawing a parallel to previous platform shifts. He explains that because AI automates human work that never previously existed in software, there are no established incumbents.
    • In mature software markets (e.g., Salesforce), value is created through configuration. In the new AI market, value is created by building solutions from the ground up to automate workflows previously done by humans.
    • The forward deployed motion serves as a wedge to install durable software. The initial custom work solves the customer's immediate problem and builds trust, paving the way for the platform to handle more of the workload over time.
    • A critical goal is to become a system of record (capturing data), a system of work (embedding in workflows), and ultimately a system of intelligence (autonomously performing the work).

Building a Moat in the Age of AI

  • The conversation explores how to build a sustainable competitive advantage, or moat, when the underlying AI models are rapidly commoditizing. The focus shifts from software itself to data and workflow integration.
    • Referencing Hamilton Helmer's "Seven Powers"—a classic business framework for identifying competitive advantages—Ben asserts that software itself was never a true moat.
    • The most relevant moats for AI companies are network effects (derived from proprietary data) and high switching costs (achieved by embedding deeply into critical workflows).
    • Scale’s unique advantage lies in its ability to "convert human knowledge into data that can be used to build AI." This creates a new, proprietary data asset—the standard operating procedures of a business—that is difficult for competitors to replicate.

Deconstructing the "Forward Deployed" Team

  • Ben details the composition and mandate of a forward deployed team, clarifying that it's more than just consultants who can code. The team is structured to solve problems holistically and feed insights back into the core platform.
    • The team consists of three distinct roles:
      • Forward Deployed Engineers: Handle data integration, infrastructure, and full-stack application development.
      • Forward Deployed Machine Learning Engineers: Build and evaluate AI agents, fine-tune models, and manage the machine learning lifecycle.
      • Forward Deployed Product Managers: Act as the voice of the customer, define requirements, and set an ambitious, AI-native vision, often serving as a "chief product officer for AI" for the client.
    • The ultimate goal of the forward deployed team is not just to deliver a custom solution but also to identify repeatable patterns. Ben states their mandate is to ask, "What are the things that we should bring back? What are the things that I can go back to my platform team and say, 'Hey, this is super important... and if I had this, I would be 10x faster.'"

Managing Scope Creep by Embracing the "Schlep"

  • A major challenge for services-led businesses is managing scope creep. Ben advocates for embracing "schlep blindness," a term coined by Paul Graham to describe the willingness to do unglamorous but necessary work.
    • The job is to solve a problem, not just build a product. This often means tackling tedious tasks like data migrations or building dashboards, as only 30% of the end-to-end solution might involve cutting-edge AI.
    • The key to managing this is to ensure the "schlep" is strategic. The team must ask if solving an annoying, small problem creates a path to a larger, compounding opportunity.
    • If a task doesn't create new data, automate a core workflow, or lead to a bigger problem, it's likely not worth doing. The focus should be on problems that "get mentioned on earnings calls and move the stock price."

The "Trading Margin for Moat" Thesis

  • The host proposes that the forward deployed model represents a strategic decision to "trade margin for a moat." In an era of intense competition, sacrificing short-term profitability to deeply embed within a customer's operations can create immense long-term value.
    • This approach is a direct challenge to the high-margin, product-led growth (PLG) model that dominated the last software cycle. The most valuable prize is owning the new layer of intelligence that sits on top of existing systems of record.
    • By getting paid for implementation, companies can discover the true value they are creating and better inform their pricing strategy. This hands-on work uncovers whether a problem is a "level one fire or a level five fire."
    • A critical warning: this strategy only works if the company remains hyper-focused on its core vision. It's easy to get pulled into solving lucrative but irrelevant problems, ultimately failing to build a coherent, defensible platform.

Conclusion

  • This episode argues that the "forward deployed" model is a powerful, albeit resource-intensive, strategy for building defensible enterprise AI companies. For investors and researchers, this signals a shift where deep integration and workflow ownership may prove more valuable than the high gross margins prioritized in the previous software era.
