
Author: Al Harris, Amazon Kiro
Published by: AI Engineer
Quick Insight: This summary is for engineers moving from AI prototypes to production systems. It details how structured requirements and formal verification turn unpredictable LLM outputs into reliable software.
Al Harris, Principal Engineer at Amazon, introduces Kiro. This agentic IDE moves beyond simple chat interfaces by enforcing a compressed software development life cycle.
"[Vibe coding] relies a lot on me as the operator getting things right."
"What I think people don't do enough is use their MCPs when they're building their specs."
"You can tweak the process to work for you, not just the process that I think is the best one."

For those of you who haven't heard of us, Kiro is an agentic IDE. We launched generally available this most recent Monday, I think the 17th, but we launched public preview in July, I think July 14th. So, out there for a few months getting customer feedback, all that good stuff. We're going to talk a little bit about using Spec-Driven Development to sharpen your AI toolbox. I did a show of hands; about a quarter of the people here are familiar with Spec-Driven Dev. My name is Al Harris, principal engineer at Amazon. I've been working on Kiro for a while now. We're a very small team; we were basically three or four people sitting in a closet doing what we thought we could do to improve the software development life cycle for customers. So we were charged with building a development tool that improved the experience for Spec-Driven Development. We were theoretically funded out of the org that supported things like Q Developer, but we were purposefully a very different product suite from the Q ecosystem, to just take a different take on these things. So we wanted to work on scaling, helping you scale AI dev to more complex problems, improve the amount of control you have over AI agents, and improve the code quality and reliability of what you get out the other end of the pipe.
Now we're back to new content. So our solution was Spec-Driven Dev. We took a look at some existing stuff out there and said, "Hey, vibe coding is great, but vibe coding relies a lot on me as the operator getting things right. That is me giving guardrails to the system, and that is me putting the agent through a kind of strict workflow." We wanted Spec-Driven Dev to sort of represent the holistic SDLC, because we've got 25-30 years of industry experience building software, building it well, and building it with different practices. We've gone through waterfall and XP. We have all these different ways that we represent what a system should do, and we want to effectively respect what came before.
So this animation looked a lot better. It was initially just the left diamond, but the idea was, you know, you're basically iterating on an idea. I think like half of software development is requirements discovery. And that discovery doesn't just happen by sitting there and thinking about what should the system do, what can the system do. We realized, though, kind of working on this, that the best way to make these systems work is to actually synthesize the output and be able to feed that back really quickly into things like your input requirements. You actually do the design, get feedback, realize, oh, actually, if we do this, there's a side effect here we didn't consider, and we need to feed that back to the input requirements.
So this compression of the SDLC evolved to bring structure into the software development flow. We wanted to take the artifacts that you generate as part of a design: that's the requirements that maybe a product manager or developer writes, that's going to be the acceptance criteria, what does success look like at the end of this. And then we take the design artifacts that you might review with your dev team, that you might review with stakeholders and say, this is what we're going to go build, and implement the thing. And we want to make sure that you can do this all in some tight inner loop. Ultimately that was initially what Spec-Driven Dev was. What Spec-Driven Development in Kiro is today, or at least was before it went GA, was: you give us a prompt and we will take that and turn it into a set of clear requirements with acceptance criteria. We represent these acceptance criteria in the EARS format. EARS stands for the Easy Approach to Requirements Syntax. It's effectively a structured natural language representation of what you want the system to do, and it lets you express that really easily.
Now, for the first four and a half months this product existed, the EARS format looked like kind of an interesting decision we made, but just that: sort of interesting. And with our launch, our general availability launch on Monday, we have finally started to roll out some of the payoffs of that decision, one of which is property-based testing. So now your EARS requirements can be translated directly into properties of the system, which are effectively invariants that you want to deliver. For those of you who have, or have not, I guess, done property-based testing in the past, using something like Hypothesis in Python, or fast-check in Node, or Clojure's spec library as another example: these are approaches to testing your software system where you're effectively trying to produce a single test case that falsifies the invariant that you want to prove. And if you can find any counterexamples, then you can say this requirement is not met. If you cannot, you have some high degree of confidence, where the word "high" there is doing a little bit of heavy lifting because it depends on how well you write your tests, but you can say with a high degree of confidence that the system does exactly what you're saying it does.
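To make that concrete, here is a hand-rolled sketch of the property-based testing idea in TypeScript. The requirement and the sorting function are invented for illustration, and real libraries like fast-check or Hypothesis add input shrinking and much smarter generators on top of this basic loop:

```typescript
// Illustrative EARS-style requirement (invented for this sketch):
//   WHEN a list of scores is sorted, THE SYSTEM SHALL return the
//   same result no matter how many times sorting is re-applied.
function sortScores(xs: number[]): number[] {
  return [...xs].sort((a, b) => a - b);
}

// Random input generator: arrays of up to 20 integers in [-500, 500).
function randomScores(): number[] {
  const len = Math.floor(Math.random() * 20);
  return Array.from({ length: len }, () => Math.floor(Math.random() * 1000) - 500);
}

// Try to falsify the invariant; return a counterexample if one exists.
function findCounterexample(trials: number): number[] | null {
  for (let i = 0; i < trials; i++) {
    const input = randomScores();
    const once = sortScores(input);
    const twice = sortScores(once);
    if (JSON.stringify(once) !== JSON.stringify(twice)) return input;
  }
  return null; // no counterexample found: high (not absolute) confidence
}
```

A single failing input falsifies the requirement; a clean run gives confidence proportional to how good your generators and trial counts are, which is exactly the "high is doing heavy lifting" caveat above.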
Yeah, so we'll get a little bit more into property-based testing and PBT a little later, but this is the first step of many we're taking to actually take these structured natural language requirements and then tie them, with a through line, all the way to the finished code, and say: if the properties of the code meet the initial requirements, we have a high degree of confidence that you have reliably shipped the software you expected to ship. So with Spec-Driven Dev, we take your prompt, we turn it into requirements, we pull a design out of that, we define properties of the system, and then we build a task list, and you can go run your task list. Effectively the spec then becomes the natural language representation of your system. It has constraints, it has concerns around functional requirements and non-functional requirements, and it's this set of artifacts that you're delivering.
So I don't think I have the slide in this deck, but ultimately the way I look at a spec is that it is, one, a set of artifacts that represent sort of the state of your system at a point in time t. It is, two, a structured workflow that we push you through to reliably deliver high-quality software, and that is the requirements, design, and execution phases. And then, three, it is a set of tools and systems on top of that that help us deliver reproducible results, where one example of that is property-based testing. Another example, which is a little less obvious but we can talk about later, is going to be, I don't even know what to call it, requirements verification. So we scan your requirements for ambiguity. We scan your requirements for invalid constraints, e.g., you have conflicting requirements, and we help you resolve those ambiguities using sort of classic automated reasoning techniques.
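As an illustration of what a conflicting-requirements finding could look like, here is an invented pair of EARS-style statements that no implementation can satisfy simultaneously for a session resumed after the idle window:

```text
WHEN a session has been idle for 15 minutes,
THE SYSTEM SHALL delete that session's conversation history.

WHEN a user resumes any previous session,
THE SYSTEM SHALL restore that session's full conversation history.
```

A session resumed after 16 idle minutes would have to have its history both deleted and fully restored; this is the kind of contradiction an automated verification pass can surface before any code is written.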
And I could talk a little bit more about sort of the features of Kiro. I think that's maybe less interesting for this talk, because we want to talk about Spec-Driven Dev. We have all the stuff you would expect, though. We have steering, which is sort of memory, sort of like Cursor rules. We have MCP integration. We have, you know, image support, yada yada. And we have agent hooks. So let's talk a little bit about sharpening your toolchain. And I'm going to take a break really quick here, just pause for a moment for folks in the room who had maybe tried downloading Kiro or something else, and ask: are there any questions right now before we dive into how to actually use specs to achieve a goal? No questions. Could be a good sign. Could mean I'm not talking about anything that's particularly interesting.
So I actually want to talk in some concrete detail here. This is a talk I gave a few months ago on how to use MCPs in Kiro. One of the challenges that people who had tested out Kiro had, which might be a little easier to see, was that they felt that the flow we were pushing them through was a little bit too structured: you don't have access to external data, you don't have access to all these other things you want. And so one thing that we said on our journey here toward sharpening your toolbox, oh, you know what, this is out of order, here's my nice AI-generated image. So: you can use MCP. Everybody here, I assume, is familiar with MCP at this point. Kiro integrates MCP the same way all the other tools do. But what I think people don't do enough is use their MCPs when they're building their specs. You can use your MCP servers in any phase of the Spec-Driven Development workflow: that's going to be requirements generation, design, and implementation. We'll go through an example of each.
So, first of all, setting up an MCP server in Kiro is fairly straightforward. We have the Kiro panel here, which has a little ghosty, and then you can go down to your MCP servers and click the plus button. My favorite way to do it, though, is to just ask Kiro to add an MCP and then give it some information on where it is, and it can usually figure it out from there, or you just give it the JSON blob and it'll figure it out. Once you have your MCP added, you'll see it in the control panel down here, and you can enable it, disable it, allowlist tools, disable tools, etc. So you can manage context that way. Worth noting: changing MCP servers, and changing tools in general, invalidates the prompt cache. So if you're very deep into a long session, maybe don't tweak your MCP config, because it will slow you down dramatically.
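For reference, an MCP server entry in that JSON blob generally looks like the sketch below. The server name, command, and arguments here are placeholders following the common MCP config shape; check the Kiro documentation for the exact file location and supported fields:

```json
{
  "mcpServers": {
    "docs-server": {
      "command": "uvx",
      "args": ["some-docs-mcp-server@latest"],
      "disabled": false,
      "autoApprove": []
    }
  }
}
```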
But let's talk about MCP in spec generation. So, something the Kiro team uses, for reasons I don't know, is Asana; it's our task tracker of choice. One thing I want to do is maybe go and say: I don't want to write the requirements for a spec from scratch. My product team has already done some thinking. We've iterated in Asana to kind of break a project down. This is not always how things work, but it's sometimes how things work. So in this case, I have a task in Asana. Oh no, I did the wrong thing; that's what I get for zooming. So I have this task in Asana that says add the view model and controller to this API. In this case, this was a demo app that I put together in a few minutes. And, it's kind of peeking out under here, but we even had some details about what we wanted to have happen. Now I can go into Kiro and just say, start executing task XYZ, URL from Asana, and Kiro is going to recognize this is an Asana URL. I have the Asana MCP installed, so it goes and pulls down all the metadata there, da da da. So it's going to break that out and from there start determining what to work on.
Oh, it's funny, these titles are backwards. Basically: create a spec for my open Asana tasks. Again, go pull from Asana all the tasks, and then for each one, generate requirements based on those tasks. So I think I had like six tasks assigned to me. One is do user management, do some sort of property management, da da da. It pulled them in, generated the requirements, and then, in this case the title is wrong, apologies, start executing the task: this is, I want to go and do the code synthesis for this. And I will take a quick break here to talk about how you can do this in practice. So for those of you who are, you know, following along in the room, feel free to fire up your Kiro, open a project, and then pick an MCP server. I'll share a few repos here really quick that you can play around with. So I have an MCP server implemented. I have this Lofty Views one, which I think implements the Asana demo. And these should all be public; let me just double check. Yeah. Okay.
So, for example, if you wanted to extend it: I have a Nobel Prize MCP, which calls, perhaps unsurprisingly, the Nobel Prize API that exists. So you can use uvx to install it, or you can git clone it from my GitHub under nobel-mcp. This is just one example. Another one here, if you want to play around with the sample that's in the video: I have lofty-views under the same account. I'll leave these both up on the screen for a few moments for folks who do want to copy the URLs. But while that is happening, oh no, let's put you on the same window. So what I'll demo quickly is the usage of an MCP to make spec generation much easier, or more reliable. So here I have, let's see, I've got a lot of MCPs. Which ones do I actually want to use? Let's use the GitHub MCP. Oh, no. Ignore me. That's better.
Okay. Well, I have the fetch MCP. So in this case I could, for example, come in here and say, hey, I've generated a bunch of tasks for this Lofty Views app. This is basically a very simple CRUD web app. But I want Kiro to use the fetch MCP to pull examples from similar products that exist on the internet. You could also use, you know, Brave Search or Tavily search MCP servers, but in this case I'll just use fetch because I've got it enabled. So let's say, oh actually, we can run the web server and use fetch. That's a good example. This is one example of how, at any point in the workflow of generating a spec, you can go through and, you know, use your MCP servers to get things working. No, this is what I get for not using a project in a while. We'll cancel that. We can actually do something a little more interesting, which is a separate project I've been working on.
So I've been working on an AgentCore agent, and I know the project works, which is the reason I'll fire it up here. Should I call it? Well, maybe we'll do live demos at the end. So that's sort of the most basic thing you can do with Kiro: just use MCP servers. But any tool uses MCP servers; I actually don't think that's particularly interesting. So let's say, in this process of trying to sharpen our spec dev toolkit, we've finished up with the 200 grit. We've added some capabilities with MCP. It's useful, but it's not going to be a game-changer for us. I want to come in here and actually get up to the 400 grit. Let's start to get a really good polish on this thing. I want to customize the artifacts produced, because you've got this task list, you've got this requirements list, and, "I don't agree with what you put in there, Al." You could say that. A lot of people do, and that's a great starting point.
So, here's something I heard earlier in the conference: people like to use wireframe mocks. Because your specs are natural language, you're using specs as a control surface to explain what you want the system to do. Therefore, I want to be able to actually put UI mocks in there. So the trivial case is that I just come in here, and Kiro's asked me, does the design look good, are you happy? And I said, this looks great, but could you include wireframe diagrams in ASCII for the screens we're going to build here. This is again from that Lofty Views thing; I'm adding a user management UI, but I want to actually see what we're sort of proposing building, not just the architecture of the thing. So Kiro is going to sit here and churn for a few seconds, but you can add whatever you want to any of these artifacts, because they're natural language. They're structured, which means we want some reproducibility in what they look like, but ultimately what they look like doesn't matter, because we've got the agent sitting here that can help translate it to what it needs to be.
So Kiro's churning away here. It's thinking, thinking, and then it's going to spit out these text-wrapped ASCII diagrams. I'll fix the wrapping here in a second in the video, but ultimately, you know, it does whatever you want. So if you want additional data in your requirements, you can do that. If you want additional data in the design, like this, you can easily add that. Here we've got these wireframes in ASCII that help me rationalize what we're actually about to ship. And then I can continue to chat and say, actually, in the design, maybe I don't want this add-user button to be up at the top the entire time, in which case I could chat with it to make that change easily, and now we're on the same page up front instead of later, during implementation time. So we've again sort of left-shifted some of the concerns.
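The ASCII wireframes in the design doc look roughly like this (a mocked-up example for illustration, not Kiro's actual output):

```text
+----------------------------------------+
| Lofty Views - User Management          |
+----------------------------------------+
| [ Add User ]              [ Search   ] |
|----------------------------------------|
| Name        Email             Role     |
| Alice       a@example.com     Admin    |
| Bob         b@example.com     Viewer   |
+----------------------------------------+
```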
So that's one example: I want to add UI mocks to the design of a system. Oh, and this is just a quick snapshot of the end state there, where now my design does have these UI mocks. But another example that I actually like a little bit more is this: including test cases in the task definitions. So today, the tasks that Kiro will give you will be kind of the bullet points of the requirements and the acceptance criteria you need to hit. But I want to know that at the end state of this task being executed, we have a really crisp understanding that it is correct. It's not just, like, done. Because anybody who's used an agent can probably testify that LLMs are very good at saying: I'm done. I'm happy. I'm sure you're happy. I'm just going to be complete. Oh yeah, the tests don't pass, but they're annoying; I tried three times to get them to work; I'm just going to move on. No, I don't want that. I want to actually know that things are working.
So, in this case, I've asked Kiro to include explicit unit test cases that are going to be covered. So my task here, for example, in creating this AgentCore memory checkpointer, is going to have all the test cases that need to pass before it's complete, and then I can use things like agent hooks to ensure those are correct. We'll run this sample a little later in the talk; this is the thing I'm going to live demo. Yeah, so this is another example where, again, you're working on your toolbench. You have all these capabilities and primitives at your control, and you can tweak the process to work for you, not just the process that I think is the best one.
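An invented example of what a task with explicit test cases could look like in the task list (the task and cases here are illustrative, not Kiro's actual output):

```markdown
- [ ] 3. Implement the AgentCore memory checkpointer
  - Persist conversation state keyed by session ID
  - Unit test cases that must pass before this task is complete:
    - returns an empty checkpoint for an unknown session ID
    - round-trips a saved checkpoint losslessly
    - overwrites, rather than duplicates, state for an existing session ID
```

The point is that "done" now has a checkable definition, which an agent hook can enforce by running the tests before the task is marked complete.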
And then, sort of last but not least, the 800 grit. At this point, we're getting a final polish on the tool. We might be stropping next. You can iterate on your artifacts, but you can also iterate on the actual process that runs. So, one thing you might have, and I do this a lot, is I'll be chatting with Kiro, and I say, "Hey, in this case, I want to add memory to my agent in AgentCore. Let's dump conversations to an S3 file at the end of every execution." Kiro is going to say, "That's great. I know how to do that. I'm going to research exactly how to do that thing. I will achieve this goal for you." But ultimately, what I've done is actually introduce a bias up front, which is I'm steering the whole agent toward using S3 as the storage solution, just because maybe I'm familiar with it, but it's probably not the best way to go about it.
So then, after it had synthesized the design and all the tasks and all this stuff, I came back and said, well, we don't need to stick to this rigid Spec-Driven Dev workflow that has been defined by Kiro. I can ask for alternatives, like: is this the idiomatic way to achieve session persistence? I don't know, maybe there's a better way. Maybe if we're talking AWS services, it's not S3, it's DynamoDB, or yada yada. Kiro's going to come in here and say, you know, good question, da da da, let me research. It's going to go through and call a bunch of MCP tools that I've given it access to; this kind of ties back to that you should be using MCP. And then it comes back with this recommendation that I didn't know was a feature, which is AgentCore memory. It says it's more idiomatic and future-proof, which maybe is TBD and should be checked a little closer, or you could use S3, which is the thing you recommended. Now, actually, I bet there's far more than two options here. So you could probably keep asking the agent, are there other options, yada yada, and it would go and continue to investigate. But you should not lock yourself into the rigid flow that is sort of the starting point here.
Yeah. So, that's actually it for my deck, I think. What I will talk about is, let's just run through that sample I just had up there. So basically, let me delete it, and I'll just do a live demo of specs in Kiro and how we can fine-tune things a little bit. So this project is a Node.js app. It's a CDK app. Again, I'm not trying to sell more AWS; this is just the technology I'm familiar with, so I can move a lot more quickly. So, I wanted to know a little bit about AgentCore, which is a new AWS offering, and as somebody building an agent, I should probably be familiar with it. And I'm not familiar enough with it. We've got some other people here who know a lot about it, so I'll put my hand up a little bit, and, you know, you caught me.
So, I set up a CDK stack, which is just, you know, IaC technology to deploy software. I'm familiar with it and I love it. So, I have a stack here that lets me deploy whatever an AgentCore runtime is. I don't know; I asked Kiro to do it. We vibe coded this part. So we vibe coded the general structure. We got an agent. We got IaC set up. I then vibe coded adding commitlint. I added Husky, a few things like this that I like for my own TypeScript projects. Prettier and ESLint, I think. So we have a basic project here that I know I can deploy to my personal AWS account. Now I'm going to come in here and, oh, and then, importantly, this is super important, because I don't know how the hell AgentCore works. And I could go read the docs, but the docs are long and they're complicated, and I'm really just trying to build out a POC to learn about it myself.
So, I added two MCP servers. Oh no, maybe I didn't. Let me check. Oh, okay. Yes, sorry, buried down here at the bottom. So this is my Kiro MCP config. I added one important MCP server here, which is the AWS documentation one. There are other ways to get documentation; you can use things like Context7, but in this case this one is vended by AWS, so I have some confidence that it might be correct. So I used this to help the agent have knowledge about what technologies exist. And I think I used fetch quite a bit as well. So these are the two sets of MCP servers I provided the system. That's great. Move on. Confirm. And I'll just rerun this from scratch.
So, what I had done yesterday evening, or maybe the evening before, was I sat down, and I have this system basically working, and now I want to start doing Spec-Driven Development. So, I want to add this session ID concept, and then I want to write the conversation to an S3 file, blah blah blah. This is the whole bias thing I showed you earlier. We're going to fire that off through Kiro. It's going to start running, chugging away, and then it's going to, you know, see if the spec exists. Okay, the folder does exist; it's probably going to realize there's no files there and start working away. But from here I'll sort of live demo. It's going to read through requirements. It's going to read through existing docs. It's going to read through existing files, gather the context it needs. Sure, in a way. But in a moment, once it generates the initial requirements and design, I am going to challenge it to use its own, you know, MCP servers: I want you to go and do some research on the best way to do this and provide me some proposals. And this is why I was hoping to get the clip-on mic working, because I've got to set this down for a moment.
Okay. So, you know: I don't know if this is the best way to do this. Go read docs, go use fetch. It's going to keep kind of churning away here and then come back to me after it's probably got a few ideas, and propose them. But this is an example of me just using additional capabilities: use fetch, use the docs MCP, use whatever you can to get the best information, and don't take at face value the things that I said. These are usually things we have to prompt pretty hard to get the agent to do, but if you're doing it in real time, it works fairly well. Again, all of these agents are going to be very easy to please. So, you know, just 'cause I said something in the stupid docs, it may or may not actually be the most important thing from the agent's perspective down the road. Okay, so it's done a little bit of research. It understands that LangGraph, which is the agent framework we're using, already has this notion of persistence, da da da. And actually, in this case, it did not use the MCP for the AgentCore docs, so it didn't find that AgentCore has this notion of persistence. So let's assume I still don't know that exists (I didn't dry-run this part a few days ago); we might have to find that later, in the design phase. So the first thing it's going to do is kind of iterate over all my requirements here.
You know, it's changed the requirements based on what it now knows about LangGraph and how it can natively integrate with the checkpointing, but it's still really crisply bound to this S3 decision that I made implicitly in the ask. So that is just something to be aware of: anything you put in the prompt is effectively grounding the agent, for better or for worse. I see it's still iterating. So, yeah, it comes through and says, does this look good, here's what we changed. I'm going to say: looks great, let's go to the design phase. So now Kiro is going to take my requirements and take me into the design phase of this project. I can make this so things are a little bit bigger. But here's an example of what I meant by these EARS requirements. So the user story here is: as a dev, I want to implement a custom S3-based checkpointer so the agent can use LangGraph's native persistence mechanism with S3. Great, that sounds reasonable to me as a person, you know, sort of co-authoring these requirements. This here, this sort of when/then/shall syntax, this is the EARS format, and the structured natural language is really important for us, to pass this through non-LLM-based systems and give you more deterministic results when we parse out your requirements. Because ultimately our goal is to actually use the LLM for, not as little as possible, but less and less over time. We want to use classic automated reasoning techniques to give you high-quality results, not just, you know, whatever the latest model is going to tell you.
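For a concrete picture of that when/shall shape, an EARS-style acceptance criterion for this feature might read (paraphrased for illustration, not Kiro's actual output):

```text
WHEN an agent invocation completes,
THE CHECKPOINTER SHALL persist the conversation state to S3
under a key derived from the session ID.
```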
So it's gone through and spits out a design doc. Let's actually just look at this in markdown. Sure: you've got a server, da da, a checkpointer that goes to S3, that makes sense, pseudocode. Again, in a real scenario, maybe I read this a little bit more closely. And what's actually new, this is the thing we shipped on the 17th, is that now Kiro is going to go through and do this formalizing of requirements into correctness properties. So right now, what the system is doing is taking a look at those requirements you generated, the requirements we agreed upon with the system earlier (these look good, I agree with them, yada yada), taking a look at the design, and extracting correctness properties about the system that we want to run property-based testing against down the road. This is something that may or may not matter for you in the prototyping phase, but should matter for you significantly when you're going to production. Because if these properties are correct, and these properties are all met, the system aligns one-to-one with the input requirements you provided.
Yeah, so while this is chugging away, any questions yet? Any folks kind of curious about this? Yeah, we're here and then there. [Audience question: what would you say is the main difference between this and planning mode?] I haven't used planning mode in a couple of weeks, so, things move so fast, it's a little wild. But I think ultimately what we would say is that Kiro's Spec-Driven Dev is not just LLM-driven; it is actually driven by a structured system. With planning mode, I'm not sure if there's actually a workflow behind it that takes you through things, but yeah, this is our take on it for sure. I'm not familiar enough to give a more concrete example, unfortunately. [Audience: it's similar, but it doesn't give you a document like this, which I think is cool; what it does is basically create you a plan that's just an execution plan.] Okay, oh, I see. So I think the fundamental difference there is: does that plan get committed anywhere, or is it just ephemeral? Okay. So what I want over time is not just how we make the changes we care about, but actually the documentation and specification about what the system does.
So the long-term goal I have is that with Kiro, we are able to do sort of a bidirectional sync. That is, as you continue to work with Kiro, you're not just accruing these sort of task lists, and, I'm just going to say "go for it" here to kick off the tasks, but we're not just accruing task lists; actually, if I come back and, let's say, change the requirements down the road, we will mutate the previous spec. So I'm looking at really just a diff of requirements. As you go through the greenfield process, you're going to produce a lot of green in your PRs, which is maybe not the best, because I'm just reviewing three huge new markdown files. But the next time, or the subsequent times, that I go and open that doc up, I want to be seeing: oh, you've actually relaxed this previous requirement, you've added a requirement that actually has this implication on the design doc. That is the process the Kiro team internally uses to talk about changes to Kiro. Our design docs have in general been replaced by spec reviews. So, you know, somebody will take a spec from markdown, they'll blast it into our wiki, basically, using an MCP tool we use internally, and then we'll review that thing and comment on it in sort of a design session, as opposed to, you know, I wrote this markdown file or a wiki from scratch. So it becomes, well, it's actually not like an ADR, because it's not point-in-time. It is like this living documentation about the system. But yeah, thanks for the question. There's one over here.
[Audience question: this may be more of a Spec-Driven Development question, but is there a template for a set of files that you fill out? Like right now you're in the design.md. Is the design.md the spec, and is it a single doc, or...?] Oh, great question. So the question was, and correct me if I'm wrong here, are there a set of templates that are used for the system, and, the question you're driving at, can you change the templates? Okay, so: there are, implicitly, in our system prompts for how we take care of your specs. You'll see here in the top navbar, right now we're really rigid about this requirements, design, task list phasing, but we know that doesn't work for everybody. For example, and we get this feedback from a lot of internal Amazonians actually, I might want to start with: I have an idea for a technical design, and I don't necessarily know what the requirements are yet. Maybe design is even the wrong word; I want to start with a technical note. This comes up a lot for refactoring, actually. So: I want to refactor this to no longer have a dependency on... here's a good example. We use a ton of mutexes around the system to make sure that we're locking appropriately when the agent is taking certain actions, because we don't want different agents to step on each other's toes. But maybe I want to challenge the requirements of the system so I can remove one of these mutexes, or semaphores, I should say. So I might start with something like a technical note, and then from there sort of extract the requirements that I want to share with the team, and say, hey, you know, I had to kind of play with it for a little while to understand what I wanted to build, but I still want to generate all these rich artifacts. So today, it's this structured workflow.
We're playing around a lot with making that a little bit more flexible. But the structure is important, because the structure lets us build reproducible tooling that is not just an LLM. So I think that's an important distinction we make: our agent is not just an LLM with a workflow on top of it. The backend may be an LLM, or it may be other neurosymbolic reasoning tools under the hood. And so we try to keep that distinction a little bit clear: you're not just talking to, like, Sonnet or Gemini or whatever. You're talking to sort of an amalgam of systems, based on what type of task you're executing at any point in time. Although when you're chatting, you are talking to just an LLM. But yeah, so we have a template for the requirements. We have a template for this design doc, because there are sections that we think are important to cover. And again, if you disagree, and you're like, I don't care about the testing strategy section, just ask the agent to drop it. Similarly, the task list is structured, because we have sort of UI elements that are built on top of it as well, like task management, and, we'll get there when we do some property-based testing, but there's some additional UI we'll add for things like optional tasks and stuff like that. So we need the structure there for our task-list LSP to work, for example. Yeah, thank you for the question. Anything else before we truck on? Cool. I may need somebody to remind me what we were doing. Oh, that's right. So, we went through and we synthesized the spec for adding memory and some amount of persistence to my agent.
By the way, I didn't introduce you to this project. This project is called Gramps. It is an agent that I'm deploying to AgentCore to learn about it; I mentioned that. But what I didn't tell you is that it is a dad joke generator. A very expensive one, since we're powering it via LLMs. But effectively: you're a dad joke generator, jokes should be clean, they should be based on puns, you know, obviously bonus points if they're slightly corny but endearing, yada yada. So we're deploying this to the back end. The reason I want memory is because every time I ask the dad joke generator for a joke, it gives me the same damn joke, and that's just super boring, and my kids are not going to be excited about that. So, I want memory so that as I come back within the same session, I get different jokes over and over again. That's the context on the project. So, we've come through here, we generated this thing, we did the task list. I said, "Hey, is this the idiomatic way to do it?" But what I know is that we're not actually using AgentCore's memory feature, which is probably a big oops. And so, you know, quick show of hands: do we want to make the mistake and go all the way to synthesis and