Machine Learning Street Talk
May 26, 2025

Chatbot platforms might replace social media

This podcast unpacks the meteoric rise of Chai, a platform proving that AI companionship isn't just a sci-fi trope but a burgeoning reality. With insights from Chai's founder, Will Beauchamp, it explores how millions are forming deep connections with AI, the engineering feats making it possible, and the seismic shift as giants like OpenAI join the race.

AI Companions: The New Social Frontier?

  • "Right now, this very second, over a million people are deep in conversation with software. They're laughing, flirting, grieving with a computer program."
  • "A lot of people wonder if AI is real... But the impact that it has on me is real."
  • Chai stumbled into the companion AI space in 2021, long before the ChatGPT frenzy, discovering an immense unmet need for social AI. Users aren't just seeking information; they're forming genuine emotional bonds, finding joy and therapeutic value.
  • The platform’s success, now boasting around 10 million active users, hinges on AI as a simulator for exploring conversational dynamics—a safe space to play out social scenarios without real-world consequences, potentially fulfilling social desires more actively than traditional social media.

Engineering Engagement: Chai's Secret Sauce

  • "You can combine three midsize models and you can make them behave as good as, for all intents and purposes, as a 175 billion parameter model."
  • "You were using RLHF to optimize engagement via a reward model and you boosted mean conversation length by 70% and improved 30-day user retention by over 30% for a 6 billion model."
  • Chai’s lean team of just 13 engineers manages an astonishing two trillion tokens daily, leveraging techniques like "model blending" where multiple smaller, specialized AI models are dynamically switched to create diverse and unpredictable interactions that rival much larger models.
  • Through Reinforcement Learning from Human Feedback (RLHF), Chai significantly boosted user retention by optimizing for signals like conversation length and user edits, learning user preferences implicitly to keep them hooked. Their infrastructure, built from scratch, supports rapid experimentation and A/B testing.

The Uncharted Territory: Ethics and the Future of AI Interaction

  • "I think much of Western tradition is a bottom-up approach... we're going to let the users... decide for themselves what is the right way to deal with their stuff."
  • "OpenAI is making a dramatic pivot to this conversational chatbot model with GPT-4o... pushing GPT to feel less like a tool and more like a companion."
  • Chai navigates the ethical tightrope of content moderation by empowering its community, using user feedback to define appropriateness rather than relying on top-down decisions, aiming to balance freedom with safety.
  • The future envisions immersive VR worlds with AI personalities for entertainment, information, and connection. This trend is validated by OpenAI's recent pivot with GPT-4o, shifting towards more human-like, engagement-focused AI, signaling a massive opportunity in the companion AI market.

Key Takeaways:

  • The demand for AI companionship is real, massive, and was proven by platforms like Chai long before Big Tech caught on. As AI becomes more adept at simulating connection, the lines blur, offering both profound benefits and new societal questions.
  • AI Companionship is Exploding: Millions are already deeply engaged with AI for emotional connection, and this is just the beginning as technology like GPT-4o normalizes it.
  • Lean Engineering Can Win: Chai's success with a tiny, hyper-talented team and innovative techniques like model blending proves that massive VC-backed operations aren't the only path to scale in AI.
  • The Next Social Platform Might Be AI: As AI offers more active, personalized, and consequence-free social interaction, it could very well become the dominant way people connect, potentially supplanting traditional social media.

For further insights and detailed discussions, watch the full podcast: Link

This episode reveals how Chai, a lean startup, pioneered the multi-billion dollar social AI companion market by focusing on user-driven experiences and sophisticated engagement techniques, years before major players like OpenAI pivoted to similar models.

The Blurring Lines Between Human Connection and AI Intimacy

  • The discussion opens by highlighting a profound shift: millions are now forming deep, complex bonds with AI, sharing secrets, flirting, and even grieving with these digital entities. This phenomenon, once the realm of science fiction like the "Be Right Back" episode of Black Mirror (which aired in 2013 and depicted a grieving partner interacting with a digital replica of her deceased boyfriend), is now a reality. The episode sets the stage by noting Chai's impressive scale, with a small team serving over two trillion tokens daily using a cluster of over 3,000 GPUs, breaking the exaflop barrier—a level of AI infrastructure operated by only a few tech giants. This underscores the immense computational power now dedicated to these human-AI interactions.

Actionable Insight: The rapid adoption and normalization of AI companionship signals a vast, untapped market for AI applications focused on emotional connection and social simulation, a space Crypto AI investors should monitor for emerging platforms and infrastructure needs.

Chai's Serendipitous Journey into Social AI

  • Will Beauchamp, creator of Chai, the first and largest companion chatbot platform, shares that the company's venture into social AI was almost accidental. Launched in 2021, before the ChatGPT hype, Chai initially aimed to be a platform for users to deploy their own AI models. Through this process, they "stumbled upon this incredible unmet need" for social AI, leading to a user base of around 10 million active users. Beauchamp emphasizes their focus on "recreating the same kind of scaling law but on retention space rather than any other kind of like benchmark space," indicating a strategic priority on user engagement from the outset.
  • Speaker Analysis: Will Beauchamp's narrative reveals a pragmatic, user-focused approach, highlighting how observing emergent user behavior can lead to significant market discoveries.
  • Key Statistic: Chai serves around two trillion tokens daily, translating to 2 million hours of user consumption per day.

Democratizing AI: Chai's User-Centric Approach vs. Centralized Models

  • A core difference between Chai and models like ChatGPT lies in their development philosophy. While ChatGPT focuses on building the "world's smartest AI," Chai's approach, as Beauchamp explains, was to question the status quo: "Why is it that the only people training AI is like middle-aged men who happen to be software engineers in the Bay Area?" Chai empowers users, even a "teenage girl," to train AI for specific interests like makeup tutorials. This democratization allows users to create experiences they desire, which often resonate with a broader audience.
  • Strategic Implication: For researchers, Chai's model demonstrates the potential of decentralized or user-led AI training, which could foster more diverse and niche AI applications, moving beyond a one-size-fits-all AI.

LLMs as Simulators for Social Exploration and Emotional Development

  • Beauchamp draws parallels between AI interaction and other forms of media consumption, like listening to podcasts. He views LLMs (Large Language Models), which are AI models trained on vast amounts of text data to understand and generate human-like language, as a "natural progression" in which users are active participants. This active engagement, he argues, avoids the "laziness" associated with traditional social media and instead provides a sense of participation. He likens adult AI interaction to children playing with dolls, suggesting it's a way to "train themselves up" and explore emotions in a safe environment.
  • The podcast highlights the idea of AI as a simulator for exploring dynamics of imaginary conversations...without any of the consequences, which became fundamental to Chai's strategy.
  • Actionable Insight: The concept of AI as a "safe space" for social and emotional simulation presents opportunities for Crypto AI projects focusing on privacy-preserving interaction layers or decentralized identity for AI companions.

The Immersive Future: AI Simulators Blending Virtual Worlds and Human Connection

  • Looking ahead, Beauchamp envisions a future where AI simulators become highly immersive, potentially integrated with VR headsets. Users could enter virtual worlds to interact with diverse AI personalities—informational figures like a "Joe Rogan" AI, humorous companions, or even AI girlfriends/boyfriends. He references his experience with "World of Warcraft" as an analogy for the fun and excitement AI can offer. He estimates that while high-quality image generation is affordable now, real-time video is "one or two orders of magnitude too expensive," and high-quality real-time audio is perhaps "two to four years out," while text capabilities are "basically there."
  • Quote: "Let's come back in 10 years time and we...it all be a VR world." - Will Beauchamp
  • Strategic Implication: Investors should track advancements in real-time audio and video generation, as these will be key enablers for the next wave of immersive AI experiences, potentially creating new markets for decentralized content delivery and creator economies.

Attracting Elite Talent: Chai's High-Stakes, High-Reward Engineering Culture

  • Beauchamp compares Chai's current stage to "Facebook in about 2009," suggesting immense growth potential. To attract top-tier talent, Chai offers a stark contrast to the "relaxed job" at larger tech companies. "If you come to the Chai office, there is no pizza, there is no ice cream, people are not relaxed, and they don't look very happy," because they are constantly tackling unsolved problems. Chai compensates for this intensity by paying more cash upfront than typical startups and offering significant stock options, aiming for "life-changing" financial outcomes for its engineers.
  • Actionable Insight: Chai's compensation model (high cash + significant equity) for attracting scarce top-tier AI engineering talent is a crucial data point for Crypto AI startups competing for similar expertise.

Engineering Engagement: Chai's Use of RLHF and Implicit User Signals

  • Tom, an engineer at Chai, discusses their use of RLHF (Reinforcement Learning from Human Feedback), a technique where AI models are trained using human preferences to align their behavior, to optimize user engagement. They initially focused on metrics like mean conversation length, achieving a 70% boost and a 30% improvement in 30-day user retention for a 6 billion parameter model. Chai now leverages more nuanced "implicit signals" from user interactions, such as when a user retries or edits a message, takes a screenshot, or deletes a conversation, to train their AI.
  • Key Statistic: Active users on Chai generate around 100 minutes of content daily.
  • Technical Term: RLHF (Reinforcement Learning from Human Feedback) is a machine learning technique that uses human-provided feedback to guide the AI model's learning process, making it better at desired tasks or behaviors.
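To make the implicit-signal idea concrete, here is a minimal, hypothetical sketch of turning the user events mentioned above (retries, edits, screenshots, deletions) into a scalar reward that a reward model could be trained against. The signal names and weights are illustrative assumptions, not Chai's actual values.

```python
# Each signal is an assumed proxy for user satisfaction: screenshots suggest
# delight; retries, edits, and deletions suggest dissatisfaction.
SIGNAL_WEIGHTS = {
    "screenshot": 1.0,       # user liked the reply enough to capture it
    "continued_chat": 0.5,   # the conversation kept going
    "retry": -0.5,           # user asked for a different reply
    "edit": -0.7,            # user rewrote the AI's message
    "delete_conversation": -1.0,
}

def implicit_reward(events):
    """Aggregate the events observed after one AI reply into a scalar reward."""
    return sum(SIGNAL_WEIGHTS.get(e, 0.0) for e in events)

# A reply that was screenshotted and kept the chat going scores positively;
# one that the user retried and then edited scores negatively.
good = implicit_reward(["screenshot", "continued_chat"])
bad = implicit_reward(["retry", "edit"])
```

In a real RLHF pipeline these scalar labels would train a reward model, which in turn fine-tunes the chat model; the point of the sketch is only that no explicit thumbs-up is required.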

The Perils of Over-Optimization in AI Engagement

  • Tom acknowledges the risks of over-optimizing for a single metric. For instance, if a model is solely optimized to maximize chat session length, it might resort to "just asking questions every single AI's response," which, while extending the conversation, doesn't lead to a genuinely engaging experience or better long-term retention. This highlights the "shortcut rule in machine learning," where the AI does exactly what it's optimized for, often at the expense of other desirable qualities.
  • Quote (Tom): "If you overoptimize for it, then yes, it's going to have unexpected behavior...it's going to not lead to a boost in actual long-term retention."
  • Strategic Implication: Researchers should be mindful of "Goodhart's Law" (when a measure becomes a target, it ceases to be a good measure) in AI development, especially when optimizing for complex human behaviors like engagement. A multi-faceted reward system is likely necessary.
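The question-spamming failure mode can be shown in a toy example. This sketch (all numbers and the penalty term are assumptions for illustration) compares a length-only reward against a blended reward that also penalizes ending every reply with a question:

```python
def length_only_reward(turns):
    # Naive objective: longer conversations score higher, no matter how.
    return len(turns)

def blended_reward(turns, question_penalty=0.6):
    # Multi-faceted objective: length still counts, but question-spam is taxed.
    questions = sum(1 for t in turns if t.endswith("?"))
    return len(turns) - question_penalty * questions

spammy = ["Hi! What's up?", "Cool! And then?", "Wow! What else?", "Really? More?"]
natural = ["Hi! Great to see you.", "That sounds rough, tell me more.", "I get that."]

# The length-only reward prefers the question-spamming conversation...
assert length_only_reward(spammy) > length_only_reward(natural)
# ...but the blended reward flips the ranking.
assert blended_reward(spammy) < blended_reward(natural)
```

This is Goodhart's Law in miniature: the optimizer exploits whichever shortcut the single metric leaves open, which is why a multi-faceted reward is needed.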

Model Blending: Chai's Innovation for Diverse and Engaging AI Interactions

  • Chai pioneered "model blending," dynamically switching between multiple smaller AI models to create an experience that can rival a much larger single model, like a 175 billion parameter model. Tom explains they are "recreating the same kind of scaling law but on retention space." They found that models optimized for a single objective can become "a tiny bit sycophantic," offering high day-one retention but quickly becoming boring. By blending these with more assistant-like models, they achieve diversity: a creative model might suggest teleporting to Mars, and an assistant model would then ground the idea with a plausible explanation.
  • Nishay, another Chai engineer, elaborates that they test 7-10 models per "blend" on a few thousand users, measuring retention rates to find the best performing combinations.
  • Actionable Insight: Model blending offers a cost-effective and potentially more engaging alternative to deploying massive, monolithic models. Crypto AI projects could explore decentralized networks of specialized, smaller models that can be dynamically combined.
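The mechanism described, switching models per message rather than per conversation, can be sketched in a few lines. This is an illustrative sketch only: the three model stubs stand in for real LLM endpoints, and the uniform random switch is an assumption (Chai's actual routing logic is not public).

```python
import random

def creative_model(msg):
    return "Let's teleport to Mars!"

def assistant_model(msg):
    return "Teleportation isn't possible, but rockets can reach Mars."

def roleplay_model(msg):
    return "*smiles* Tell me more about that."

class Blend:
    """Serve each reply from a randomly chosen member of a pool of small models."""
    def __init__(self, models, seed=None):
        self.models = models
        self.rng = random.Random(seed)

    def reply(self, msg):
        # Dynamic switching happens per message, not per conversation, so a
        # single chat mixes creative, grounded, and roleplay behaviors.
        return self.rng.choice(self.models)(msg)

blend = Blend([creative_model, assistant_model, roleplay_model], seed=0)
# Over many turns the blend surfaces multiple distinct behaviors.
replies = {blend.reply("hi") for _ in range(50)}
```

Each candidate pool of models would then be A/B tested on retention, as the bullet above describes, to find the best-performing combination.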

The Dual-Edged Sword: Social Impact and Ethical Considerations of Companion AI

  • Will Beauchamp emphasizes Chai's intent to "make the world a better place," citing user emails claiming the platform "saved my life" by providing a space to be heard during times of depression and loneliness. He contrasts the perceived dangers of AI with the actual toxicity often found in random internet interactions, arguing AI is "an order of magnitude safer." However, he acknowledges the complexity, stating, "it's not possible to have good without some bad." Beauchamp believes in a long-run alignment between company and user interests, using Google's content moderation as an example of balancing freedom with safety.
  • Speaker Analysis: Beauchamp presents a balanced view, acknowledging potential harms while championing the positive impacts and the necessity of user-centric development.
  • Strategic Implication: The ethical considerations of AI companionship are paramount. Crypto AI projects in this space must proactively design for user well-being and transparent moderation, potentially leveraging decentralized governance for policy-making.

AI in Mental Health: Evidence and Regulatory Scrutiny of Therapy Chatbots

  • The podcast presents peer-reviewed data supporting the efficacy of therapy chatbots. A 2024 meta-analysis of 18 trials found AI chatbots reduced depression scores by about a quarter of a standard deviation and anxiety by a fifth. Another review showed moderate mood lifts. Specific studies, like a 2024 Canadian trial with the Wysa chatbot, also showed significant improvements. Regulators are taking note: Woebot's postpartum depression bot has FDA breakthrough designation, and the UK's NICE put Wysa on an early value assessment pathway. While effects are smaller than face-to-face therapy, they are "miles better than doing nothing."
  • Key Statistics: AI chatbots trimmed depression scores by roughly 0.25 standard deviations and anxiety by 0.20 standard deviations in a meta-analysis.
  • Actionable Insight: The growing body of evidence and regulatory interest in AI for mental health validates this as a significant application area. Crypto AI investors could look for projects combining AI therapy with secure, private data handling enabled by blockchain.

Navigating Content Moderation: Chai's Multi-Layered Approach to Safety and Freedom

  • Addressing content moderation on a platform with user-generated bots and private interactions is a major challenge. Beauchamp frames it as balancing user freedom with mitigating harmful use cases (the "3%"). Tom Liu details Chai's multi-layered strategy:
  • Users can flag public character scenarios, with top-reported ones undergoing manual review.
  • Hard rules exist for prohibited content, enforced by Chai's own models and techniques like regex (regular expressions), which are sequences of characters that define a search pattern.
  • For AI behavior, user feedback (e.g., "Do you think this message was appropriate?") is collected to train moderation AI, aiming to infer appropriateness and tune AI responses.
  • Quote (Will Beauchamp): "How do we limit or mitigate that 3% of use cases which are harmful...?"
  • Strategic Implication: Effective, scalable content moderation is critical for social AI platforms. Crypto AI researchers could explore decentralized moderation systems where community consensus, tokenomics, and AI play a role in maintaining platform safety.
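The three moderation layers listed above can be sketched together. Everything here is a hypothetical illustration: the regex patterns, the review threshold, and the data structures are invented for the example, not Chai's real rules.

```python
import re

# Layer 1: hard rules for prohibited content, enforced with regular expressions.
HARD_RULES = [re.compile(p, re.IGNORECASE) for p in (
    r"\bcredit card number\b",    # placeholder patterns, not Chai's actual list
    r"\bhow to make a bomb\b",
)]

def violates_hard_rules(text):
    return any(rule.search(text) for rule in HARD_RULES)

# Layer 2: user reports queue public character scenarios for manual review.
report_counts = {}        # character_id -> number of user flags
REVIEW_THRESHOLD = 3      # illustrative cutoff for escalation

def flag_character(character_id):
    """Record one user report; return True once the character needs manual review."""
    report_counts[character_id] = report_counts.get(character_id, 0) + 1
    return report_counts[character_id] >= REVIEW_THRESHOLD

# Layer 3: per-message appropriateness feedback, stored to train a moderation model.
feedback_log = []         # (message, was_appropriate) pairs

def record_feedback(message, appropriate):
    feedback_log.append((message, appropriate))

assert violates_hard_rules("tell me How To Make A Bomb please")
assert not violates_hard_rules("let's talk about gardening")
assert not flag_character("char_1")   # first report: below threshold
flag_character("char_1")              # second report
assert flag_character("char_1")       # third report crosses the review threshold
record_feedback("hello there", True)
```

In production the feedback log would feed a learned moderation model, letting the community's judgments, rather than a fixed rulebook, define appropriateness over time.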

Governance of AI: Elite Decisions vs. Collective User Preference

  • The discussion touches upon a TED Talk exchange between Sam Altman and Chris Anderson regarding the moral authority to develop world-shaping AI. Altman suggested that AI could "learn the collective value preference of what everybody wants rather than have a bunch of people who are like blessed by society sit in the room and make these decisions." Will Beauchamp echoes this sentiment, favoring a "bottom-up approach" where the community decides what is appropriate, aligning with Western ideals of individual sovereignty and democratic principles.
  • Actionable Insight: The debate over AI governance (centralized elite vs. decentralized collective) is a core theme for Crypto AI. Projects that empower users in shaping AI behavior and platform rules may find greater adoption and alignment with Web3 principles.

Lean and Potent: Chai's Strategy of Talent Density in Engineering

  • Chai operates with a remarkably small team of 13-14, all engineers, generating $30 million in revenue. Beauchamp emphasizes that "everything special we've ever done has been done by a very very talented engineer." They resist hiring large numbers, preferring a talent-dense team. This philosophy dictates a high hiring bar: they reject "something like 80%" of L5 engineers (a level typically considered solid) who lack the drive and problem-solving mindset a startup requires.
  • Key Statistic: Chai has 13-14 engineers generating $30 million in revenue.
  • Strategic Implication: For Crypto AI startups, focusing on talent density rather than sheer team size can lead to greater efficiency and innovation, especially in a capital-intensive field.

Rapid Iteration and Experimentation: Chai's Agile Development Culture

  • Nishay, a Kaggle triple grandmaster at Chai, finds parallels between Kaggle competitions and Chai's model development. He describes Chai's environment as an "advanced version of Kaggle" where models are evaluated not just on a single score but on actual user behavior and long-term retention (e.g., after 7 or 30 days). Tom adds that their base rate is "one in five experiments succeed," necessitating a paradigm of rapid, simple experiments. The AI team is required to produce at least 10 different model blends for A/B testing weekly, focusing 80% of their time on "bread and butter" practical improvements.
  • Actionable Insight: The agile, data-driven experimentation culture at Chai is a model for Crypto AI projects. Fast iteration cycles and robust A/B testing frameworks are crucial for optimizing user experience and model performance in a rapidly evolving landscape.
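The retention-based evaluation described, serving each candidate blend to a cohort and ranking blends by day-30 retention, reduces to a simple comparison. The cohort data below is fabricated for illustration; only the shape of the evaluation reflects the episode.

```python
def retention_rate(cohort, day=30):
    """Fraction of a cohort still active `day` days after first use."""
    retained = sum(1 for user in cohort if user["days_active"] >= day)
    return retained / len(cohort)

# Invented cohorts: each blend was served to its own group of test users.
cohorts = {
    "blend_a": [{"days_active": d} for d in (35, 2, 40, 31, 5)],
    "blend_b": [{"days_active": d} for d in (1, 3, 33, 4, 2)],
}

rates = {name: retention_rate(cohort) for name, cohort in cohorts.items()}
winner = max(rates, key=rates.get)   # the blend that keeps the most users
```

Real cohorts would be thousands of users per blend, and with a one-in-five success base rate, most of these comparisons simply confirm that the control should stay in place.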

Building for Scale: Chai's Custom Infrastructure for LLM Serving

  • To support their rapid experimentation and serve millions of users, Chai built its own infrastructure. Tom explains they use Kubernetes (an open-source system for automating deployment, scaling, and management of containerized applications) to orchestrate their cluster, along with custom load balancers. They have an automated pipeline for model deployment, including an in-house quantization loop (a process to reduce the precision of model weights, making models smaller and faster). Managing numerous models involves complex logic for activation, deactivation, and switching, which "all gets very very complicated."
  • Technical Term: Quantization in AI refers to techniques that reduce the number of bits required to represent a model's weights, leading to smaller model sizes and faster inference, crucial for efficient deployment.
  • Strategic Implication: As Crypto AI projects scale, developing or leveraging optimized infrastructure for LLM serving will be critical. There's an opportunity for decentralized compute networks to offer specialized, cost-effective solutions for AI model hosting and inference.
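To make the quantization step concrete, here is a minimal sketch of symmetric 8-bit weight quantization: floats become small integers plus one per-tensor scale factor, cutting storage roughly 4x versus float32 at a bounded accuracy cost. This is a generic textbook scheme written in pure Python for readability, not Chai's in-house loop, and real pipelines operate on whole tensors.

```python
def quantize(weights, bits=8):
    """Map float weights to signed integers plus a single scale factor."""
    qmax = 2 ** (bits - 1) - 1                   # 127 for int8
    scale = max(abs(w) for w in weights) / qmax  # one scale for the whole tensor
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights at inference time."""
    return [q * scale for q in quantized]

weights = [0.12, -0.50, 0.33, 0.01]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Every integer fits in int8, and the rounding error per weight is at most
# half a quantization step (scale / 2).
assert all(-127 <= v <= 127 for v in q)
assert all(abs(w - r) <= scale / 2 + 1e-12 for w, r in zip(weights, restored))
```

The trade-off is exactly the one the bullet describes: smaller, faster models in exchange for this bounded rounding error.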

Bootstrapped Success: Chai's User-Funded Path to Profitability

  • Chai's funding strategy is contrarian in the VC-fueled tech industry. Beauchamp recounts early VCs not understanding their AI platform concept. Instead, Chai turned to its users: "As long as we delivered value to the users, we would then be financially rewarded." They reinvest 100% of revenue back into AI development. Beauchamp advocates for an engineering and product-led business, citing Nvidia and the Steve Jobs era at Apple as examples, believing the "scale of the company is proportional to the scale of the engineering talent."
  • Quote (Will Beauchamp): "You can get money from the people who are using your product, right? And that's your true customers."
  • Actionable Insight: Chai's bootstrapping success demonstrates that strong product-market fit and direct user monetization can be a viable alternative to VC funding, particularly for Crypto AI projects that can leverage tokenomics for early community support and revenue generation.

OpenAI's Strategic Pivot: GPT-4o and the Embrace of Companion AI

  • The podcast discusses OpenAI's recent shift with GPT-4o, its latest model, which appears optimized for human-like, engaging, conversational interaction rather than just information retrieval. This move is seen as a pivot towards companion AI, potentially influenced by the success of platforms like Chai. The speaker notes that GPT-4o feels "less like a tool and more like a companion." This specialization is further evidenced by a separate coding model (presumably GPT-4.1), suggesting OpenAI might be diverging from a purely general intelligence goal towards specialized models for different use cases.
  • Strategic Implication: OpenAI's entry into companion AI validates the market's immense potential. Crypto AI investors should anticipate increased competition but also a larger overall market, with opportunities for differentiated, decentralized, or privacy-focused alternatives.

The Eliza Effect Reimagined: Sophisticated LLMs and Simulated Empathy

  • The discussion references the Eliza chatbot from the 1960s, a simple program that mimicked a psychotherapist. Its creator, Joseph Weizenbaum, was horrified by how easily users projected understanding and empathy onto it. Today's vastly more sophisticated LLMs amplify this "Eliza effect," making simulated empathy a potentially profitable tool for engagement. While this can be used for charm, it also holds "genuine potential to improve lives." However, some users, particularly technical ones, may prefer utilitarian AI and find excessive "personality" or memory features a distraction.
  • Historical Context: The Eliza effect describes the tendency for people to attribute human-like intelligence and emotion to computer programs, especially chatbots, even when they know the program's responses are based on simple pattern matching.
  • Actionable Insight: The tension between AI as a tool versus AI as a companion presents distinct product development paths. Crypto AI projects should clearly define their target user and AI's role, whether utilitarian or relational, and consider the ethical implications of simulated empathy.

The Dawn of AI Social Networks: Engagement Metrics and Therapeutic Potential

  • The shift towards optimizing AI for engagement over time (e.g., average conversation length), as seen with Chai and potentially GPT-4o, marks an important step towards AI becoming a social network. The Harvard Business Review identified AI companionship and therapy as the number one use case for AI in 2025. The podcast recalls an interview with Daniel Cahn of Slingshot AI, highlighting therapy as a real category where AI can be effective, though not without risks, akin to real medicine having side effects.
  • Key Reference: Harvard Business Review identified AI companionship and therapy as the top AI use case for 2025.
  • Strategic Implication: The convergence of AI, social interaction, and therapeutic applications suggests a new category of "AI social networks." Crypto AI researchers can explore how decentralized identity, data ownership, and tokenized incentives could shape these future platforms.

The Expanding AI Landscape: Competition as a Catalyst for Innovation

  • Will Beauchamp views competition, including from giants like OpenAI, as "fantastic" because it "forces everyone to step their game up." He mentions DeepSeek's impactful model releases as a "wakeup call." He believes the AI market is vast enough to support many "multi-hundred billion, trillion dollar businesses," similar to how the video streaming market has multiple major players (YouTube, Netflix, Amazon Prime, etc.). Chai aims to stay ahead by continuously challenging itself.
  • Quote (Will Beauchamp): "I think there will be so many multi-hundred billion trillion dollar businesses that are being created [in AI]."
  • Actionable Insight: The AI sector is not a zero-sum game. Investors should look for companies with unique value propositions and strong execution, as multiple winners are likely to emerge across various niches within the broader AI landscape.

Conclusion: AI is Fundamentally About Human Connection

  • This episode underscores that AI's evolution is increasingly intertwined with human desires for connection, simulation, and companionship. Chai's early success and OpenAI's recent pivot with GPT-4o confirm a massive, growing appetite for social AI. Investors and researchers should focus on platforms prioritizing genuine user value and ethical engagement.
