Latent Space
January 17, 2026

Brex’s AI Hail Mary — With CTO James Reggio

By Latent Space

This summary is for builders moving beyond simple chat interfaces into autonomous financial systems. It explains how Brex restructured its entire engineering culture and tech stack to support a multi-agent network that slashed burn and 10xed operational speed.

  • 💡 How does Brex automate 80% of customer onboarding without human intervention?
  • 💡 Why did Brex abandon reinforcement learning for simple web research agents?
  • 💡 What does an AI-native engineering interview look like for a multi-billion dollar fintech?

Brex is no longer just a credit card company; it is an experiment in agentic finance. CTO James Reggio explains how the firm survived a period of intense burn by infusing LLMs into every corporate, operational, and product pillar.

The Agentic Org Chart

“It’s the agent org chart with my EA DMing other specialists.”
  • Hierarchical Agent Networks: Brex uses a tree-based structure where a primary assistant orchestrates specialized sub-agents. This modularity allows teams to iterate on specific domains like travel or policy without breaking the total system.
  • Multi-turn Collaboration: Agents communicate via natural language rather than simple RPC calls. This allows sub-agents to ask clarifying questions back to the user through the orchestrator.
  • Encapsulation Patterns: Software engineering principles are being projected into the agent space. Treating agents as microservices prevents the "God Model" problem where one prompt tries to handle too much complexity.
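
The hierarchical routing described above can be sketched in a few lines. This is an illustrative sketch only, not Brex's actual framework: the agent names and the keyword-based router are invented for the example (in a real system the routing decision would itself be an LLM call).

```typescript
// Minimal sketch of a hierarchical agent network: one orchestrator
// delegating to domain-specific sub-agents via natural-language messages.
// All names and the keyword router are hypothetical stand-ins.

type SubAgent = {
  name: string;
  handle: (message: string) => string; // natural language in, natural language out
};

const travelAgent: SubAgent = {
  name: "travel",
  handle: (msg) => `Searching flights per your request: "${msg}"`,
};

const policyAgent: SubAgent = {
  name: "policy",
  handle: (msg) => `Checking company policy regarding: "${msg}"`,
};

// In production this routing decision would be made by an LLM;
// keyword matching keeps the sketch self-contained and runnable.
function route(message: string): SubAgent {
  if (/flight|hotel|travel/i.test(message)) return travelAgent;
  return policyAgent;
}

function orchestrate(userMessage: string): string {
  const specialist = route(userMessage);
  // The orchestrator relays the message and returns the specialist's reply.
  return `[${specialist.name}] ${specialist.handle(userMessage)}`;
}
```

Because each sub-agent hides its domain behind a plain-language interface, the travel team can rework its agent without touching the orchestrator, which is the encapsulation benefit the bullet describes.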

Operational Efficiency

“We want a decision within 60 seconds that’s fully touchless.”
  • Automated Underwriting: Brex replaced human-heavy KYC and fraud checks with research agents. This enabled the company to serve lower-margin commercial segments that were previously ROI negative.
  • Visual Prompt Management: Most operational AI tools are built in Retool to empower non-engineers. Domain experts can refine prompts and run evals without waiting for a developer.

The AI-Native Workforce

“This is just amplifying all the good and the bad in the industry.”
  • Mandatory Agentic Interviews: Every Brex engineer must pass a coding project that is impossible to complete without AI assistance. This ensures the entire team possesses the skills to manage high-velocity code generation.
  • Second-Order Slop: Rapid AI code generation increases the drift between team members and their understanding of the codebase. Senior engineers must pivot from writing lines to supervising architectural integrity.

Actionable Takeaways

  • 🌐 The Macro Shift: The transition from deterministic software to agentic networks. Companies are moving from rigid workflows to fluid systems that plan and execute autonomously.
  • ⚡ The Tactical Edge: Build an internal LLM gateway early. Centralizing model routing and cost monitoring allows you to swap providers as the model horse race changes without refactoring your product.
  • 🎯 The Bottom Line: AI is not just a feature but a fundamental restructuring of the corporate cost center. Efficiency gains allow a static headcount of 300 engineers to support a business growing 5x.
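
The gateway takeaway above can be made concrete with a small sketch: one routing layer maps stable model aliases to providers and tracks spend, so swapping providers is a config edit rather than a product refactor. The provider names, model names, prices, and the stubbed `complete` call below are all illustrative assumptions, not Brex's actual configuration.

```typescript
// Hypothetical LLM gateway sketch: centralizes model routing and cost
// accounting behind stable aliases so product code never names a provider.

type Provider = "openai" | "anthropic";

interface ModelRoute {
  provider: Provider;
  model: string;
  usdPer1kTokens: number; // made-up prices for the sketch
}

// Swapping providers as the horse race shifts is an edit here,
// not a refactor across product code.
const routes: Record<string, ModelRoute> = {
  "chat-default": { provider: "anthropic", model: "claude-x", usdPer1kTokens: 0.003 },
  "cheap-batch": { provider: "openai", model: "gpt-mini", usdPer1kTokens: 0.0004 },
};

let totalSpendUsd = 0;

function complete(alias: string, prompt: string): string {
  const route = routes[alias];
  if (!route) throw new Error(`unknown model alias: ${alias}`);
  // Stubbed provider call; a real gateway dispatches over HTTP here
  // and records usage for observability.
  const estTokens = Math.ceil(prompt.length / 4);
  totalSpendUsd += (estTokens / 1000) * route.usdPer1kTokens;
  return `[${route.provider}/${route.model}] response`;
}
```

Product code only ever asks for `"chat-default"`, which is what lets the gateway team re-point that alias at a new provider without any caller changing.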


We have like three pillars for our AI strategy. We have our corporate AI strategy which is how are we going to adopt and buy AI tooling across the business and basically every single function to be able to 10x our workflows.

Then we have our operational AI strategy which is how are we going to buy and build solutions that enable us to lower our cost of operations as a financial institution.

And then the final pillar is the product AI pillar which is are we going to introduce new features that enable Brex to be a part of the corporate AI pillar of our customers. It's like we want to build features and be a solution that somebody else is saying to their board, hey we adopted Brex and this is part of our corporate AI strategy.

Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space.

Hey, hey, hey. And we're here with James Reggio, CTO at Brex. Welcome.

Hey, thank you for having me.

Thanks for visiting from up in Seattle where I've been a little bit. It's cold up there, huh?

Yeah. And we have an atmospheric river hitting the city right now. So, a lot of.

Well, yeah, it's we're getting the full-on winter effect right now.

Well, you're here. We're talking about the sort of AI transformation within Brex. There's a lot of interesting tidbits that we're going to draw from your article but also your background.

You've got a wide array of experience, from Stripe to Banter to Convoy. But mostly I'm interested in your journey as one of the rare people who have transitioned from being a mobile engineering leader to a CTO.

I used to have this comment that there's a career ceiling for people who work on client-only things, where usually they don't hit CTO; companies typically promote the backend people, the cloud infrastructure people, to CTO.

Yeah. You know, it's something that I hear fairly frequently, because there aren't that many folks with a front-end background who reach this level of leadership, and it's exciting for me to be able to represent that group.

I'll say that even though my resume kind of reflects that I've been more on the front end of things, it's probably my experience as a founder a couple of times over that actually helped me get to this level of my career working for somebody else.

Becoming CTO was very much a leadership and general business role as much as a technical role. And so I think it was the skills I built from starting companies and trying to scale them that made me a decent fit and enabled me to get the nod from Pedro to take this on when my predecessor left about two years ago.

Yeah. One thing I'm curious to get your commentary on, and this is a little bit broad and unscripted: a lot of startups are bragging about how many ex-founders they have. And yes, to some extent you want people with the founder mentality and agency, which is what you did, to be your employees and to take initiative in the company. But I also wonder if it's becoming an anti-signal sometimes. I don't know if you've thought about this.

I think it's more about the churn for me, especially when people are hiring ex-founders. If you truly have the founder gene, it's kind of hard to just stay somewhere as an IC for too long. It's like, all right, I joined this thing, and then in one year I'm back to being a founder.

I'm curious for you: I'm sure you thought about leaving and doing another company instead.

In fact, that was the alternative I was considering. Even at the time that I got the phone call where they made me the offer to become CTO, I was thinking about leaving to go start a company.

And you know, I think what's interesting about it: we actually launched a new recruiting and employee value proposition for Brex a couple months ago called Quitters Welcome, where we intentionally lean into this idea that we have a disproportionate number of folks who go on to become founders, or heads of a department, when they leave our company, and we celebrate that.

It's actually something that I'm very proud of and that means that we welcome in people who want to get a different experience.

I think there are certainly a lot of founders who don't make it, who don't scale their own businesses to the scale that we've achieved at Brex. So there's something to be learned when they come in.

And then we're very happy to support people on their way out. And so I actually really like hiring former founders or future founders.

The one value proposition I find most relevant, because a lot of the folks we're hiring as AI engineers are folks who are either winding down their companies or considering maybe running an AI startup:

The thing that resonates the most with them is that we can often give them interesting problems to solve, maybe even problems they'd want to build their own startup around, but with instant distribution. That is the allure. You can come into this business and build financial AI applications and instantly have them deployed to roughly 40,000 customers, from the Fortune 100 down to tens of thousands of startups.

So that's what I think appeals to founders. But the challenge then is making sure that we set them up for success in an environment that still feels a little bit like the startup they might build themselves, versus something that's too corporate.

Yeah. Instead of doing your own company and then coming to you like, "Can I integrate into Brex and get all the data?" How's the engineering team structured?

Yeah. So we have about 300 people in engineering, about 350 total across EPD, and for the most part we structure around our product domains. Brex is a corporate card; it's also a corporate bank account, expense management, travel, and accounting. We have full-stack product domains of roughly 30 to 40 people for each of those, covering everything from the low-level infrastructure up to the web and mobile experiences.

That's generally the structure of our engineering organization. Then we naturally have an organization that focuses on infrastructure and security, and there are two additional centers of excellence that we've built that kind of violate that org design, where we've felt the need to put more focus or operate slightly differently.

And AI is one of those areas where we have another team of just roughly about 10 people who are focused primarily on LLM applications.

And we wanted to create a bit of a separation there, because of the way that we were thinking about this. This is actually something we did this summer: we paused on our AI journey towards infusing our product with AI and generating customer value.

We asked ourselves: what would a company that was founded today to disrupt Brex look like? And then we tried to use the answer to that question to form this team internally. So it's a little bit off to the side.

Ideally everybody comes up to speed and contributes LLM features, but we have this sitting off on the side right now in a centralized manner.

What's the difference in AI adoption for those teams? Are the people on the LLM team much bigger Cursor users, Claude users, or do you see similar diffusion?

It's actually fairly uniform across the entire engineering department. It's kind of funny: one of our largest Cursor users is actually an engineering manager.

And I think this also just speaks to our core value of operating at all levels, where we want all of our EMs and everybody in leadership to still basically do the job that they're managing, not just manage the work.

So I think the journey of getting everybody into using agentic coding was not exclusive to the AI group.

Yeah. In fact, I think this podcast was actually set up because I cold-outreached to Pedro after he tweeted this. He says: I started a new company inside Brex to build the future of agentic finance. No BS, just builders building 996 and pushing production-grade agents to 30,000 finance teams. Now 40,000.

And then he actually has a little job description, which I think is really interesting. But I'll skip that and go straight to: Brex accelerated to grow 5x and cut burn 99% in the past 18 months. I assume that's a mix of internal AI automation and other stuff.

But basically, I wanted to put some headline numbers up front to impress people before we dig into the details.

Yeah, absolutely. And you're correct. That's the team that we have, this AI team. It's actually, what was that, a very young team.

Yeah, it's very young. And it's been really interesting. The composition of the team is very young, AI-native, like 20-year-olds who basically grew up with the tech, paired off with more staff-level software engineers who have been at Brex a little while and can navigate the existing codebases and understand the product and the customer deeply. We've formed these really tight-knit pods in the AI org. It's generally three people: somebody who has more of a product and customer-focused background, that staff engineer who knows where the skeletons are, and then a much younger, AI-native engineer who can just do things with agents that the rest of us dinosaurs can't even dream of. I think part of it is that sometimes too much experience, or too much knowledge of how to solve a problem, can actually be an impediment to thinking differently about it, to thinking about it from an AI-first lens.

But yes, we've been slowly growing that team in the same way a pre-seed startup would: you want to be very careful about talent density and very deliberate, only hiring when you absolutely need it.

And so yeah, at this point it's just about 10 people, and I think it was probably four or five people when Pedro put that tweet out a couple months ago. I think everybody was actually in the photo that was attached to it.

Yeah, we'll put it up. It's a photo at 1:20 a.m. on a Friday.

Yes. Oh, yeah. Because we always do Friday demos, and that's a time for everybody to get exec review time, and so everyone's in Seattle.

Those folks were all in Seattle, but they're actually geographically distributed. We have a couple folks here, a couple in São Paulo, a couple in Seattle.

How do you address this? We have this AI center of excellence, which is basically the people running these teams across companies. Yep. How do you make the other engineers not feel like they're not special?

I think that's something I hear a lot: "Hey, why aren't these people working on all the cool LLM things while I'm stuck working on, you know, the KYC integration with whatever?" You know what I mean? How do you build that culture?

You know, it's interesting. I thought that would be more of a problem, but the benefit of having really optimized our engineering culture around business impact actually causes it to cut in the other direction, where some folks don't want to work on the AI products because they don't have as much clear, direct business impact right now. They don't impact revenue as directly.

And so for the most part, we've enabled folks who have a strong desire to work on AI products to join that team. Somebody transferred out of our expense management organization to come over because they're really passionate about taking their knowledge of policy evaluation and bringing it into the AI team.

But for the most part, I think everybody understands how their work ladders up, and maybe there's some friendly rivalry, because the folks who, say, work on the card product drive 60% of our direct revenue, so they're pretty happy with that and don't feel like they're being left out.

And I will also say, as you probably saw in the piece that we put out with First Round, there are a lot of smaller applications of LLMs peppered throughout all of our product and operations teams. It's just that some of the more novel agentic layer that sits on top of Brex has been put together in this sort of isolated team.

So it's not like folks aren't getting to to build with LLMs or use LLMs on a daily basis.

Yeah. Maybe run people through the Brex agent platform. We'll put the diagram in the video, where you had the LLM gateway; you have the whole MCP layer. We just had David, a co-creator of MCP, right before you. So this is very timely.

Yeah, how did you start building that? What's the architecture?

Yeah, the architecture. You know, I think simple is elegant, and we've had basically an LLM gateway and a basic hand-rolled platform from the very early days.

In fact, right before being tapped to become CTO, I was leading an AI labs team internally. In the wake of the announcement of ChatGPT, everybody saw this new technology and said, "Hey, what are we going to do with it?"

And so one of the first things that we did, I think it would have been January 2023, was to put together some internal infrastructure that made it possible for us to deploy, manage, version, and eval prompts, and then be able to manage data egress and model routing and have some very basic observability and cost monitoring in an LLM gateway.

So that's infrastructure we stood up, and it still continues to power a lot of those smaller, more, let's say, precise applications of LLMs.

So for instance, we've set up a completely automated pipeline for evaluating customer applications to get them onboarded instantly to Brex, which is something that used to require human intervention for underwriting or KYC. Now we basically have a series of agents, particularly research agents, that will go and do the work that humans would normally do.

And so that's running on top of this hand-rolled framework. Then there are the agents on Brex that we announced in our fall release, which is this agentic layer that we're building that sits on top of Brex and can embody workflows that a finance team would normally hire humans for.

We've actually started using Mastra for that, as the primary framework for accelerating us. They've built everything in TypeScript, which is another technology choice that answers the question of what we would do if we started Brex today, but isn't the case for all of our existing backend code, which is either Kotlin or Elixir. And then we have a mix of pgvector and Pinecone. I think what we've seen is that we're always re-evaluating the tech and framework choices as we go, because the half-life of code has declined so significantly with agentic coding that it's actually quite easy for us, and for anyone else, to try a variety of different pieces of tech on for size to figure out what is going to be most ergonomic for solving the problem.

Double-click on Mastra. That's a new choice, an interesting one.

Yeah, I mean, I think the main reason we adopted Mastra is that its ergonomics are quite similar to the internal LLM framework that we built two and a half years ago.

Whereas LangChain was available at the time, two and a half, three years ago, it didn't quite feel right to us when we were trying it. It kind of addressed things that weren't the pieces we needed to address, which was being able to have really simple observability and logging and tracing. LangChain didn't do that at the time. Well, they fixed that. Yeah, no, they certainly did. But I'm trying to remember, because this is now ancient history: we evaluated LangChain, turned off of it, built our own thing, and then as we were looking, we kind of wanted to deprecate this internal framework we built, because at the end of the day, it's not leveraged for us to maintain that.

And Mastra ended up fitting the bill for the feature set that we were looking for. And I think what's been interesting is that about half of the applications we're building right now on the agent layer are running on Mastra, and the other half are actually still running on yet another internally developed framework, one that's focused more on networks of agents. So multi-agent orchestration, versus stricter single-turn workflows, which are easier to build with either LangGraph or Mastra.

Tell us about your multi-agent framework. What are the design considerations? Why is this the first we're hearing about it?

Yeah. So it's funny, a big reason why we haven't written more about this is that it continues to evolve quite a bit. We actually had a blog post that we were going to put out in conjunction with the fall release, talking about how we built this, and by the time we finished the blog post and had it all packaged and ready, it was already halfway outdated.

And the way this multi-agent network approach to implementation started to emerge was when we were trying to scale up our consumer-grade Brex assistant.

So if you think about Brex and our customers, there are really two very broad personas that we serve. We serve members of a finance team, who are generally in roles like accountant, controller, or head of T&E. Those folks are going to be interacting with agents that are much more specific to their roles.

But the other broad cohort of users we have are employees of companies that have deployed Brex. You go join a new company, that company uses Brex, you get your Brex card. And our goal for employees is for Brex to completely disappear. The best UI/UX for Brex is just the card. Every single thing you have to do in the software beyond swiping the card is an opportunity for AI to eliminate some work for you.

And so what we thought was the right approach was to embody an executive assistant for every employee. As an executive at Brex, I have an EA, and she knows enough about me. She has access to my calendar and my email, and has all the context on when I'm traveling and for what business purposes. And so she's basically able to do everything that I would be obligated to do in Brex, be it booking travel or doing expense documentation.

And so we wanted to build that EA, connected to the same data sources, and see if we couldn't simulate that behavior, so that your interface to Brex is basically SMS and the card.

And when we started building that out, the most naive architecture would be to have an agent with a variety of tools, and maybe do some RAG to ensure it has appropriate context for the conversation.

But what we were finding is that the wide range of different product lines that exist on Brex made it difficult for one agent to perform well while being responsible for everything from expense management to finding and booking travel to answering policy and procurement questions.

And so that's when we started breaking the problem down into a variety of sub-agents that sit behind an orchestrator. And obviously this isn't something that can easily be implemented using LangGraph, though Mastra even has the notion of these agent networks in beta.

But because of what we found when it came to being able to build evals for the system, we kind of just hit the eject button and built our own framework, one in which we have agents that are able to basically DM other agents and have multi-turn conversations amongst themselves to coordinate to complete an objective.

And what's been nice about that is it means your Brex assistant is one single point of contact between you as an employee and the Brex product. Then behind your assistant, if the company has expense management turned on, you have that. If they have reimbursements, there's another agent for that. If they have travel attached, there's a travel agent for that.

Our conception here is that it's generally software encapsulation patterns projected into the agent space. It also makes it easier for us to have the team that owns and understands travel be the ones to iterate on that, without needing to worry about regressing the total system, or needing one team to own every single possible action you could take as an employee.

And I'll say that I'm still of the mindset that somebody will build a great framework and we'll ultimately migrate to it, or it might be us that ultimately open-sources this. But for us, this has worked out quite well compared to a couple of other approaches that we tried along the way that just didn't perform well. One was to overload the agent with a variety of tools. Another was context switching, where we'd try to say, oh, this conversation looks like it's more about reimbursements, so let's update the prompt with more reimbursement context. That didn't perform as well as actually having a reimbursement agent to collaborate with.

What about MCPs as sub-agents?

Oh yeah, that's another pattern. The key thing there is that there's actually a lot of value in having multi-turn conversations from the orchestrator, or the assistant, to the sub-agent, whereas a tool call is basically just one RPC.

And so oftentimes what will happen is, let's say the user reaches out to their Brex assistant and says, hey, how much am I allowed to expense per person for dinner tonight? I'm taking my team out.

Your assistant is then going to reach out to the policy agent. And maybe, in order to answer that question, the policy agent needs to know whether this was a customer event, a team event, or whether you're traveling.

And so it can't just answer the question. It's going to reply back to the assistant and say, hey, I need you to ask this clarifying question. And then the assistant will return to the user, ask the clarifying question, and they'll basically have this multi-turn conversation across multiple agents, versus it being encapsulated in a single call-and-response tool call.

And so all the sub-agents still have a ton of tools, but I think of MCP and tool usage as being the interface to all of our conventional imperative systems, not the AI space.
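
The back-and-forth Reggio describes, where a sub-agent can either answer or bounce a clarifying question back through the assistant, can be sketched as a reply type with two variants. The policy rules, dollar limits, and message shapes below are invented for illustration; a real sub-agent would be an LLM call, not a lookup.

```typescript
// Sketch of the clarifying-question relay: a sub-agent's reply is either
// a final answer or a question the orchestrator must relay to the user.
// All names and limits are hypothetical.

type AgentReply =
  | { kind: "answer"; text: string }
  | { kind: "clarify"; question: string };

// Stubbed policy sub-agent: without context it asks for clarification.
function policyAgent(question: string, context?: string): AgentReply {
  if (!context) {
    return {
      kind: "clarify",
      question: "Is this a customer event, a team event, or travel?",
    };
  }
  const limitUsd = context === "team event" ? 50 : 100;
  return { kind: "answer", text: `You may expense $${limitUsd} per person.` };
}

// The assistant loops: relay clarifying questions to the user, feed the
// user's reply back into the sub-agent, stop at a final answer.
function assistant(
  userQuestion: string,
  answerFromUser: (q: string) => string
): string {
  let reply = policyAgent(userQuestion);
  while (reply.kind === "clarify") {
    const userContext = answerFromUser(reply.question);
    reply = policyAgent(userQuestion, userContext);
  }
  return reply.text;
}
```

The point of the two-variant reply type is exactly the distinction Reggio draws: a plain tool call could only return the answer variant, while the `clarify` variant is what turns a single RPC into a multi-turn conversation.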

Yeah, that's the conversation we were having earlier: whether or not it should be an agent-to-agent protocol as well. Yeah, there should be like a chat back. Exactly. Exactly.

And one of the ways that we actually grafted this into Mastra, before we built our own framework, was to make every sub-agent a tool. The input was just natural language, the output was natural language, and if you needed multi-turn, you would basically pass the full conversation in as you kept calling the sub-agent as a tool. At that point you're like, okay, the framework is fighting me on this. It's actually helpful for us to conceive of it as an org chart. It's the agent org chart, with my EA DMing other specialists and having brief conversations to support me as their client.
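
The stopgap described here, wrapping each sub-agent as a tool whose input and output are plain strings while the caller re-sends the growing transcript on every call, might look like the sketch below. The tool shape is a generic illustration, not Mastra's actual API, and the reimbursement agent is a stub standing in for an LLM.

```typescript
// Sketch of "every sub-agent is a tool": natural language in, natural
// language out, with the caller threading the full transcript into each
// invocation to fake multi-turn over a single-shot tool interface.

interface NLTool {
  name: string;
  invoke: (conversation: string) => string;
}

// Hypothetical reimbursement sub-agent exposed as a tool. It only sees
// whatever transcript the orchestrator chooses to pass in.
const reimbursementTool: NLTool = {
  name: "reimbursement",
  invoke: (conversation) =>
    conversation.includes("receipt attached")
      ? "Reimbursement approved."
      : "Please attach a receipt.",
};

function callWithTranscript(tool: NLTool, turns: string[]): string {
  // Each call re-serializes the whole conversation, which is the
  // ergonomic friction that motivated a native multi-agent framework.
  return tool.invoke(turns.join("\n"));
}
```

Re-sending the entire transcript on every call works, but it makes the tool boundary do the job of a conversation, which is the mismatch Reggio means by the framework fighting him.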

Yep. That was a really good deep dive. Thanks for indulging. I feel like you guys are not afraid to make your own tech which I think is a competitive advantage. I really like that culture.

Maybe we should go a bit breadth-first as well. I think we also deep-dived a little too much into one area. We'll put up the chart, but I'm also very interested in the internal agent stuff, the operational stuff, and just the general platform scope. So, please feel free to go into your spiel on it.

Yeah, of course. So, one of the things that I was trying to do at the beginning of the year, as CTO: I think it really fell to me to articulate what our AI strategy was as a business. Every member of our board was asking, "Hey, what's your AI strategy?" And while we were doing a lot, we'd literally go, "He's got it." Well, yeah.

And if I didn't, I'd be in trouble. I think he was also counting on me, given that I was running the AI organization before becoming CTO. That's true. But a big part of it was that we were doing a lot with LLMs. It was more like these little one-off features: hey, maybe mix in some suggestions here, or do a little bit of ops automation over there. But it wasn't easy to create a verbal framework for all of these investments, and without that framework, we weren't able to set a vision or a roadmap for investments.

What we did at the beginning of the year is we took everything that was going on, as well as all of our ambitions, all the good ideas, and the problems we were trying to tackle as a business this year, threw it all on the table, and saw if there were some ways to cluster it into a framework that made sense to the business, to our board, and to ourselves.

And we came up with, I think this is not particularly novel, but has helped us quite a bit.

We have like three pillars for our AI strategy. We have our corporate AI strategy which is how are we going to adopt and buy AI tooling across the business in basically every single function to be able to 10x our workflows.

Then we have our operational AI strategy, which is how we are going to buy and build solutions that enable us to lower our cost of operations as a financial institution. I think it's fairly intuitive: financial institutions like ours face a lot of regulatory expectations, and there's just a high ops burden for running our business. So it's a lot of internal use cases: being able to do fraud detection, underwriting, KYC, and being able to handle dispute automation on card transactions. Those types of operational investments are our ops AI pillar.

And then the final pillar is the product AI pillar, which is: are we going to introduce new features that enable Brex to be a part of the corporate AI pillar of our customers? We want to build features and be a solution that somebody else is saying to their board, hey, we adopted Brex and this is part of our corporate AI strategy.

Yeah. And so it kind of has this nice little feedback loop. Within the company we basically did a bit of divide and conquer, where folks in IT and on our people team were spending more of the effort driving corporate AI: making the procurement decisions and creating a culture of experimentation where we spotlight and incentivize people for trying to improve their personal workflows using AI.

And then the pieces I've been more involved in have been operational and product. We were just talking about product here, which is the agents on Brex and such. But I think the operational AI investments have been some of the most immediately impactful to the business, because we have hundreds of people who work in our operations organization. It's actually something that differentiates us, because our CSAT and the quality of our support and service is very high, something we're very proud of. So we're trying to figure out how to automate a significant portion of this and use LLMs in a way that doesn't degrade the customer experience, while also addressing what the future of the roles of the people who already work full-time for us looks like.

So this is where Camila, our COO, who co-wrote the First Round piece with me, has been leaning in really aggressively to help every member of the operations organization rethink their role: not as people who execute against an SOP, but as people who build prompts, build evals, and become more AI-native in the way they work.

And so a lot of the engineering we've done has been to enable folks, say, in fraud and risk to refine prompts and add additional automation to their workflows.

Yeah. And this secret fourth pillar, the platform. Yeah, exactly. That is the thing that ties it all together: the platform.

And I think what's been really nice is that even though the platform is kind of a loose term, because it consists of a wide variety of technologies, as I said, we haven't been too religious or dogmatic about everybody needing to be on one particular thing.

What we've seen is that making a variety of ergonomic options for building with LLMs available has really made it easier for us to make a quick leap forward on operational AI. As soon as we put our mind to it, we said: look, we want to hit an 80% automated acceptance rate for all startup and commercial businesses that apply for Brex. We want a decision within 60 seconds that's fully touchless, no humans involved.

We were able to break that down and then actually build the agents and tools on top of that platform really quickly, and a lot of those tools are the same tools that our product AI agents use as well.
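As a rough sketch (not Brex's actual system; all names here are illustrative), the touchless-decision goal can be pictured as an orchestrator that runs research checks under a hard time budget and escalates to a human whenever a check fails to return a clear verdict:

```python
import time

TIME_BUDGET_S = 60  # target: a fully touchless decision within 60 seconds


def decide(application, checks):
    """Run each research check (e.g. KYC, fraud, underwriting) in turn.

    Each check returns "pass", "fail", or "unclear". Any failure declines
    the application; any ambiguity or budget overrun drops out of the
    touchless path and escalates to a human reviewer.
    """
    deadline = time.monotonic() + TIME_BUDGET_S
    for check in checks:
        if time.monotonic() > deadline:
            return "escalate"          # missed the 60-second target
        verdict = check(application)
        if verdict == "fail":
            return "decline"
        if verdict != "pass":
            return "escalate"          # ambiguity always goes to a human
    return "approve"                   # fully touchless acceptance
```

The key design point is that the automated path only ever approves on unanimous, unambiguous passes; everything else falls back to the existing human workflow rather than degrading decision quality.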

I was pretty sold on ConductorOne. I don't know if it falls under exactly that bucket, the ConductorOne provisioning command. I was like, "Yep, I want that."

Yeah, I'd love to talk about that. So that's actually on the corporate side. And I think this goes back to maybe another intuitive but, I'd say, bold decision we made, which is that we're not going to try to pick winners in the horse race between the foundation model providers, or the agentic coding tools, or basically anywhere there's an active horse race.

Instead of trying to pick a single solution, we procure a small number of seats across multiple solutions and then give employees the ability to pick whichever one they want to use. So for instance, employees can go into Slack and use ConductorOne to get a ChatGPT, Claude, or Gemini license, and basically build their own stack. You pick your chat provider; as a dev you can pick between Cursor, Windsurf, or Claude Code credits. You can craft your stack to your preference and easily switch between them.

And what that does for us too: obviously we have enterprise agreements in place for all of them for the sake of the privacy and non-training guarantees, but it's fun because when we go to renew these contracts, we can resist the need to do a wall-to-wall deployment. We can say, hey, look at the usage trends. Our employees are voting with their feet, they're voting with their dollars, and maybe your tool isn't as hot as it was a year ago.

Does it give you a dashboard of what people are choosing?

Yeah, actually we look at that. We were looking at it as we go into budgeting for next year. Very interesting. I would love to see that: anything that's really up, anything that's really down. It's fascinating how different the landscape is every three months.

And I think one of the interesting challenges we had early on was getting folks to just try these tools, to incorporate AI coding. Early on, say 12 to 18 months ago, it was getting folks to take the time to try a new workflow. Now, at this point, I think what we're seeing is that even when a new model hits, like when Codex came out and everybody said Codex is better at codegen but a little bit slower, I find fewer folks are kicking the tires on new things, because they're just so comfortable with the ergonomics of their current workflow. Some folks say, "I want to stick with Claude Code because I know it now. I've been working with it for nine months, so I don't feel the incessant need to keep trying new things." It's like, I'm an iPhone person and I'm just going to stay with an iPhone, even though there's some really sexy Android hardware out there.

Do you have one of those big numbers, like "80% of all of our code is written by AI"? How do you measure it internally?

Yeah, no, not really. What we do is measure attribution on the number of commits that carry the Co-authored-by trailer, and we pull some of those stats, but in fact I don't index on those.
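A minimal version of that attribution stat, assuming commit messages carry the standard `Co-authored-by:` trailer that agentic coding tools typically append (the messages themselves would come from something like `git log --format=%B`):

```python
def ai_commit_share(commit_messages):
    """Fraction of commits whose message carries a Co-authored-by trailer.

    This is a coarse proxy: it only counts commits where the tool left
    its trailer intact, so it undercounts AI-assisted work that was
    committed without attribution.
    """
    if not commit_messages:
        return 0.0
    flagged = sum(
        1 for msg in commit_messages
        if "co-authored-by:" in msg.lower()
    )
    return flagged / len(commit_messages)
```

As the quote notes, a ratio like this is easy to pull but easy to over-index on, since trailer presence says nothing about how much of each commit the tool actually wrote.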
