AI Engineer
January 12, 2026

OpenAI + @Temporalio : Building Durable, Production Ready Agents - Cornelia Davis, Temporal

How Temporal Provides the "Save Game" for OpenAI Agents

by Cornelia Davis



This summary is for builders moving beyond fragile AI demos into production-grade systems. It explains how the Temporal and OpenAI integration ensures agents survive process crashes without losing state or re-burning tokens.

  • 💡 How can an agentic loop survive a total system failure?
  • 💡 Why did OpenAI make their SDK runner class abstract for Temporal?
  • 💡 How do micro-agents use context handoffs to solve complex tasks?

Cornelia Davis brings her distributed systems expertise from the early Cloud Foundry days to the agentic frontier. She demonstrates how Temporal acts as a "save game" for OpenAI agents by providing a durable backing service for non-deterministic loops.

The Durability Gap

"When you're on the 1,350th turn to the LLM and your application crashes, no sweat."
  • Durable State: Temporal records every LLM interaction and tool result via event sourcing. You never lose progress or waste budget on redundant API calls when a process dies.
  • Logical Abstraction: Code runs as a persistent entity rather than a fleeting physical process. Developers focus on business logic while the infrastructure handles the mechanics of state recovery.
  • Distributed Reliability: Every activity call is facilitated over internal queues. Your agent becomes a resilient distributed system by default without you managing Kafka or Redis.

The Integration Edge

"We actually have an integration between the two products that Temporal and OpenAI worked on together."
  • Abstract Runners: OpenAI modified their SDK to allow Temporal to inject persistence directly into the runner. This native integration makes the agentic loop itself a durable workflow.
  • Activity Tools: Functions are wrapped as Temporal Activities with built-in retry policies. Every tool execution is tracked and retried automatically during network flickers or downstream outages.

The Micro-Agent Architecture

"I love the notion of micro-agents that do one thing and one thing well."
  • Context Handoffs: Orchestration happens by switching the persona of a single loop rather than starting new processes. This keeps token usage efficient while allowing specialized agents to handle niche tasks.
  • Elastic Scaling: Workers pull tasks from queues instead of running in a monolithic block. You can increase agent capacity instantly by spinning up more worker threads across your cluster.

Actionable Takeaways

  • 🌐 The Macro Migration: The industry is moving from "Agent as a Script" to "Agent as a Durable Service" where state management is handled by the infrastructure.
  • The Tactical Edge: Wrap your existing API tools with the integration's `activity_as_tool` helper to gain automatic retries and execution history (see the sketch after this list).
  • 🎯 The Bottom Line: Reliability is the only moat in the agentic economy. If your agent cannot survive a server restart during a three-day task, it is not ready for the enterprise.
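For readers who want to try that takeaway, here is a minimal sketch of what the wrapping might look like, assuming the `temporalio.contrib.openai_agents` integration module (the exact import path may differ across SDK versions, and `get_weather` is an illustrative activity, not code from the talk):

```python
# Hedged sketch: a Temporal activity exposed as an agent tool.
# Assumes the temporalio.contrib.openai_agents integration; the exact
# module path and helper location may vary by SDK version.
from datetime import timedelta

from agents import Agent
from temporalio import activity
from temporalio.contrib import openai_agents


@activity.defn
async def get_weather(city: str) -> str:
    # Illustrative tool: any flaky external call benefits from activity retries.
    return f"Weather in {city}: sunny"


agent = Agent(
    name="Weather agent",
    instructions="Answer weather questions using your tool.",
    tools=[
        # The tool call gains Temporal's retry policy and recorded history.
        openai_agents.workflow.activity_as_tool(
            get_weather,
            start_to_close_timeout=timedelta(seconds=10),
        )
    ],
)
```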


I'll introduce myself in just a moment, but I'd like to get to know a little bit about you first. You can see that there are two brands up on the screen here: the OpenAI Agents SDK in particular, and Temporal. I work for Temporal; I'll tell you more about myself in just a second.

I'm curious, how many folks are using the OpenAI Agents SDK today? Okay, about a quarter of you. Any other agentic frameworks? Okay, about the same set of you. So it looks like there's quite a number of you who are not using an agent framework just yet. I'll teach you a little bit about that.

Okay, next question. How many folks are doing anything with Temporal? Not very many. Awesome. I'm going to get to teach you some stuff.

So, we're going to talk today about both of those technologies. I'm going to talk about each of them independently, but I'm going to spend a lot of time on them together. Spoiler: we actually have an integration between the two products that Temporal and OpenAI worked on together, and you'll see it's really quite sweet.

So, let me very briefly introduce myself. My name is Cornelia Davis. I'm a developer advocate here at Temporal. I have spent a lot of time, I think the bulk of my career, in the distributed systems space. I was super fortunate to be at Pivotal working on Cloud Foundry from the early 2010s, so I was really there during the movement toward microservice architectures, distributed systems, those types of things.

Any Cloud Foundry folks in the room? Oh, just a few. So, for those of you who don't know it, Cloud Foundry was the early container technology out on the market. It was incubated as an open-source project at VMware, and it used container images, Linux containers, container orchestration, eventual consistency, all of that stuff before Docker even existed and well before Kubernetes existed.

So I was very fortunate to be there at the beginning of that movement toward platforms that supported this more agile, distributed-systems way of doing things, and because I spent so much time in the microservices world, I also wrote this book.

Okay. So what we're going to talk about today is the OpenAI Agents SDK. Then I'm going to give you a Temporal overview. I'm going to do lots of demos and show you the repos; if you want to follow along, you can go ahead and grab them.

Both of my demos I actually changed this morning, so they're sitting in branches instead of in the main branches, but I will make that very clear as well. We're going to do lots of demos there, then I'm going to move over to the combination of the OpenAI Agents SDK and Temporal together, and we'll do more demos there as well. And then I'm going to talk a little bit about orchestrating agents in the general sense.

So this here is a notebook that I'm not going to use today. I just ran this workshop earlier this week, and I decided that for the AIE crowd it was way too basic. That said, if you're interested, you can go there and it will take you through. It's set up with Jupyter notebooks; you can run it in Codespaces on GitHub and run your first OpenAI Agents SDK agent.

Then you can run your first 101 Temporal application, not an agent, but a Temporal application, and move all the way through the agenda that way. But it's pretty basic, and I decided that for this crowd I wanted to do something more advanced. So we're not going to use that today; I just crafted some of these demos this morning.

Okay. So, without further ado, the shortest part of the presentation: an intro to the OpenAI Agents SDK. This was launched in, I think, around the May time frame or so. I'm not going to read you these slides, and just so you know where we're going: I am going to use some slides, because I'm one of those people who thinks the pictures really help. I've got lots of diagrams in here, but we are going to spend a lot of time stepping through the code as well.

I don't think I need to define what an agent is. I will tell you that, for me personally, the distinction I make between GenAI applications and agents is when we give the LLMs agency, when the LLMs are the ones deciding on the flow of the application. That, to me, is what an agent is. Frameworks like the OpenAI Agents SDK are designed to make it easier for you to get started with those, and in fact we'll see a contrast on that with the two major demos I'm going to show you today.

It's available in both Python and TypeScript. And here is the most basic application. What you see here is that we've defined an agent: we've given it a name and instructions, and it's taken defaults for the rest. Other things that are defaulted include the model itself; I don't know what the default is right now.

And then all you need to do after that is run it. Anytime you see that Runner.run, what it corresponds to is an agentic loop, and we'll talk about the agentic loop several times throughout the presentation. Every one of those Runner.run calls is its own agentic loop, and when we get to the orchestration stuff later on, you'll see why I make that distinction.
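To make that concrete, here is a minimal sketch of that hello-world shape using the SDK's Python package (the name, instructions, and prompt are illustrative, not the exact slide code):

```python
# Minimal sketch of the most basic OpenAI Agents SDK application.
import asyncio

from agents import Agent, Runner

# An agent with just a name and instructions; the model and everything
# else fall back to the SDK's defaults.
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant. Respond in haiku.",
)


async def main() -> None:
    # Each Runner.run call drives one agentic loop: call the LLM, invoke
    # any tools it requests, feed results back, repeat until it's done.
    result = await Runner.run(agent, "Tell me about recursion.")
    print(result.final_output)


asyncio.run(main())
```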

As I said, this is really simple, but there are a lot of other options you can put into the agent configuration that drive how the agentic loop works. You can have handoffs; we will talk about those, so I'll clarify that later. You can put guardrails in place. You can add tools. And you'll see that both of my examples are heavy-duty on LLM agency and on it deciding which tools to use. So, I'm going to show you tools.

So, there's a lot more you can do in here, and I'll show you examples of that as we go along. And really, this is the picture of what I'm talking about: every one of those Runner.run calls basically has a loop that is constantly going back to the LLM. After the LLM call, it decides what to do. If the LLM, for example, has said, "I want you to invoke some tools," it will go ahead and invoke those tools, take the output from the tools, route it back to the LLM, and keep going. The LLM gets to decide when it's done following the system instructions, and we'll see that.

Okay. So that is the basic agent framework overview, and there are lots of agent frameworks out there.

Okay. Since very few of you know Temporal, I'm going to slow down a little bit here and tell you more about it. Temporal is an open-source project. It's been around for about five or six years, so yes, it well predates the GenAI boom that we're in. It's designed for distributed systems, and what are AI applications if not distributed systems? So, it turns out that Temporal is beautifully suited for this category of AI use cases.

Now, it's used in a lot of non-AI use cases. For example, every Snapchat snap goes through Temporal. Every Airbnb booking goes through Temporal. Pizza Hut and Taco Bell orders go through Temporal. There are lots of other ones that I'm not remembering. OpenAI Codex runs on Temporal. So now we start moving into the AI use cases: Codex runs on Temporal, and OpenAI's image gen runs on Temporal. Those are the two I can tell you about, the two that are publicly known, but we've got lots of others out there; Lovable runs on Temporal. So we're definitely making inroads, with lots of use in the AI space.

So I've told you who's using it, but let me tell you what it is: distributed systems as a backing service. I think everybody's familiar with the notion of Redis as a backing service, or Kafka as a backing service, or a database as a backing service. I've got my applications running, and I use these back-end services to play a part in my application. Temporal is a backing service, and what it delivers is distributed-systems durability. I'll make that clearer as we go through the presentation.

What that means is that you as the developer get to program the happy path. You get to program your business logic, and the business logic we're going to program today is AI agents. So you get to say: what I want to do is call an LLM, then take the output from the LLM, maybe invoke some other APIs, and then loop back to the LLM. You don't have to build in the logic that says what happens if the LLM is rate limited, what happens if my downstream API is down for a moment, what happens if my application crashes. You don't have to program any of that. We do it for you, and I'll show you a few pictures of how this works in just a moment.

So there's a Temporal service that is the backing service, and the way you connect to it is through an SDK. The SDK sits alongside your business logic. The way you craft your business logic, you put wrappers around certain functions, and that allows the SDK to say: "Oh, hang on. You're making a downstream API call. I'm going to step in and provide you some service. I'm going to provide you retries. If that downstream service succeeds, I'm going to record that for you. I'm going to record the answer so that, in the event that something happens and we need to go through the flow again, I can just return the result you got before."

What that means is, for example, if you have used Temporal to lend durability to your agents, when you're on the 1,350th turn to the LLM and your application crashes, no sweat. We have kept track of every single LLM call and return, and you will not be re-burning those tokens. That's what it means; that's what durability means in this space.

We formally support seven different programming languages, and Apple just a couple of weeks ago released a Swift SDK as well, so there's support in just about any language. There's also experimental stuff out there in Clojure, those types of things. I said it's an open-source project: the vast majority of it is MIT licensed, with a little bit of Apache 2 left over in the Java SDK. So very permissive licenses. And for those of you who don't know the history, Temporal was a fork of a project created out of Uber called Cadence.

Anybody know Cadence? Yeah. Okay. So a few people know Cadence. Pretty much every application running at Uber runs on Cadence, and it's because they can program the happy path and all the durability is just taken care of for them. So that's the overview of what Temporal is.

So I'm going to talk about two foundational abstractions. There are a handful of others as well, but the two foundational abstractions you need to know about as a developer are these. First, an activity. An activity is just a chunk of work: work that makes external calls, so it might fail, or a lot of work that you don't want to have to redo in the event that something goes wrong. It's things like withdrawing from an account or depositing into an account. We'll get to the AI use cases in just a moment.

So, those are activities, and you wrap that. Oh, and I didn't mention it, but the SDKs are not just thin wrappers sitting on top of a REST API. As you can imagine, delivering durability across distributed systems means that all of those algorithms you thought you had to implement, worrying about concurrency, quorum, all of that, are implemented in Temporal, and a lot of that logic lives in the SDK. The service is mostly persistence for that. So there's a lot of intelligence in the SDK.

So for these activities, if you've said, "Look, here's my work, here's a heavy-duty piece of work or something that's going external," let's put an activity decorator on that. Then the SDK says, "Oh, okay, I'm going to give you some special behavior." Then you orchestrate those activities together into your business logic, and what we call those orchestrations is workflows. Okay?
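In the Python SDK, those two abstractions look roughly like this minimal sketch (the banking names echo the withdraw/deposit example above and are illustrative, not code from the talk):

```python
# Minimal sketch of Temporal's two core abstractions in Python.
from datetime import timedelta

from temporalio import activity, workflow


@activity.defn
async def withdraw(account: str, amount: int) -> str:
    # A chunk of work that calls out externally and might fail.
    return f"withdrew {amount} from {account}"


@workflow.defn
class TransferWorkflow:
    @workflow.run
    async def run(self, account: str, amount: int) -> str:
        # The orchestration: each activity result is durably recorded,
        # so a crash here resumes without redoing completed steps.
        return await workflow.execute_activity(
            withdraw,
            args=[account, amount],
            start_to_close_timeout=timedelta(seconds=30),
        )
```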

And you'll see that when you put activities and workflows together, that's where the magic really happens. There is some level of magic in the activities alone, and in fact we're just starting to release what we call standalone activities, so you'll be able to use activities without workflows and get some of those durability benefits there as well. There's all sorts of evolution happening.

But the type of magic I'm talking about when you bring workflows and activities together: I overlaid a bunch of icons on here, these little retry icons. In your workflow logic, you specify the retry configuration. You can decide: are you going to do exponential backoff? Unlimited retries? Top out at five retries? Have a maximum window between retries? You get to configure all of that. And as soon as you do that and orchestrate these things together, you get retries, simply by calling these activities; you'll see the code in just a minute. I don't have to implement the retry logic or any of this other logic. It just happens for me.
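For reference, a retry configuration like the one being described might look like this with the Python SDK's `RetryPolicy` (the specific values are illustrative, not recommendations):

```python
# Illustrative retry configuration for Temporal activities.
from datetime import timedelta

from temporalio.common import RetryPolicy

retry_policy = RetryPolicy(
    initial_interval=timedelta(seconds=1),  # wait before the first retry
    backoff_coefficient=2.0,                # exponential backoff
    maximum_interval=timedelta(minutes=1),  # cap the wait between retries
    maximum_attempts=5,                     # set 0 for unlimited retries
)

# Applied per activity call, e.g.:
# await workflow.execute_activity(
#     withdraw, args=[account, amount],
#     retry_policy=retry_policy,
#     start_to_close_timeout=timedelta(seconds=30),
# )
```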

So I get retries. I also get these little queues. What looks to you like a single-process application, I'm calling this, then this, then this, but every time you call into an activity and every time you come back from an activity to the main workflow, all of that is facilitated over queues. So what looks like a single monolithic application already turns into a distributed system. You can deploy a whole bunch of instances of it and scale by just deploying more. And you don't have to manage Kafka queues or any of that stuff; it's all built in.

I spoke with somebody this week here at AI Engineer who's an open-source user, actually a customer of ours, and I asked him why he picked Temporal. He said: because we tried to build all of this with Kafka queues, and we ended up spending all of our time doing operations on Kafka and 25% of our time on the business logic; when we switched over to Temporal, we were spending 75% of our time on business logic. They're using Temporal Cloud. I didn't mention our business model: we offer that service as SaaS. So they basically shifted from 25/75 to 75/25 by moving over, and they no longer have to manage Kafka queues or Redis or anything like that.

And speaking of Redis, in the upper right-hand corner you see state management. One of the things we do as well is keep track of where you are in the execution of your application. We do that by recording the state: again, every time you're making calls to an activity and coming back, we record that. It's basically event sourcing. It's not only event-driven architecture; we're doing event sourcing as a service. We store all of that state so that if something goes wrong, and I'm going to demo that, we're going to see things going wrong, it will pick up where it left off, because we will just run through the event history and resume.

So, those little icons that I showed overlaid on the logical diagram, and I'll get to your question in one second, all of those services actually live up here in the service. So they're all durable. They're not living in the process; they're living in the service.

You have a question? "Not sure it's relevant, but a lot of the agents I have handle streaming data, so I wanted to see if Temporal helps with that."

Oh, great question. So the question was: a lot of the agents I'm building are doing streaming; do you support streaming? The answer right now is a very simple no, we don't. But my colleague at the back, Johan, is head of AI engineering, so chat with him, chat with either of us. It is one of the two top-priority things we're working on right now. The other one is large payload storage. If I don't have a chance to talk about it here during the workshop, come find one of us and we can tell you about it.

You can imagine what large payload storage is: you're doing LLMs, so you're going to be passing big stuff around. Instead of passing it around by value, pass it by reference. That's what large payload storage is. That's Johan.

I'll just mention there are a bunch of people using workarounds, doing streaming in production today at scale. So happy to talk about that, but there's going to be a more integrated solution coming.

Yeah. So I'm just going to repeat what Johan said in case you couldn't hear it: we do have customers that have built streaming support on top of Temporal, but what we're doing is building it in natively. So you can do it today; it's just a little bit more work. It's not the happy path.

So, with that, I want to give you a demo. This is going to be my first demo, so let me move over here. Let's see if I can get my screen back. Okay. If you want to follow along, the first thing I'm going to do is come over here and increase the font size. I'm going to point you to two repositories. This is actually the second repository, but what I have up on the screen right now is that if you want to get started with Temporal, it's super simple: you don't have to use Temporal Cloud, you can just run a Temporal service, the backing service, locally on your machine. You can do it by curling this, or you can Homebrew install it as well. Then, to run that local server, you can just say `temporal server start-dev`, and now you've got a Temporal service running locally. All of my applications here are just connecting to my localhost. We'll see the UI in just a moment.

I'll come back to this repository in just a moment. The repo I'm going to demo for you is this one. Sorry, I don't know how to increase the font size here, but you can see that the org is temporalio; that's also where you'll find all of the open source for Temporal. And then we have something called the AI cookbook, and this is one of the examples. I actually extended the example just this morning, so you're going to find it in a branch: the branch we're going to demo today is the agentic loop demo branch. So if you want to go back and take a look at this later yourself, that's what we're going to be looking at.

Okay. So with that, let me get to the right terminal. This is where I'm going to run it, but I want to show you the code first. Am I in the right one? Nope, this is the wrong one, my other cursor. Here we go. So this is the agentic loop. I'm doing two demos today, and what you see here on the left-hand side, let me make it just one tick bigger, is that, remember, I talked about activities and workflows. The first thing I'm going to show you is the activities. Remember we had withdraw and deposit, that type of thing. Here, of course, what we're doing is an agentic loop, so my activities are going to be "call the OpenAI API" (not the Agents SDK yet, just the OpenAI API) and "invoke some tools." Those are my two activities.

So let's look at the first one, and you'll see how simple it is. I promised you the happy path; it really is that. Here is my call to the OpenAI Responses API. It is exactly what you would expect: I'm passing in the model, I'm passing in some instructions, the user input is going to come in, my tools (which I'll show you in just a moment), and then I've got some timeouts I can configure there. That's the OpenAI part. What I've done is wrap that in a function that takes in a request, so all of those parameters come from a request I'm passing in, and you'll see how I invoke this in just a moment. And here is that annotation. Now, the different SDKs have different approaches: TypeScript, for example, doesn't require a bunch of annotations, it just figures out where the activities are; Java has annotations, those types of things. But this is Python, so you can see here that we just have an activity decorator. It's not complicated at all. All you need to do as a developer is say, "Here's a bunch of work that I want to encapsulate into a step." You put it in a function, and you put an activity decorator on that.
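The activity being described is roughly this shape (a sketch; `LLMRequest` and its field names are stand-ins for the cookbook's actual types):

```python
# Sketch of an OpenAI Responses API call wrapped as a Temporal activity.
# LLMRequest and its fields are illustrative, not the repo's exact types.
from dataclasses import dataclass, field

from openai import AsyncOpenAI
from temporalio import activity


@dataclass
class LLMRequest:
    model: str
    instructions: str
    input: list = field(default_factory=list)  # conversation history so far
    tools: list = field(default_factory=list)  # JSON tool descriptions


@activity.defn
async def invoke_llm(request: LLMRequest) -> dict:
    client = AsyncOpenAI()
    response = await client.responses.create(
        model=request.model,
        instructions=request.instructions,
        input=request.input,
        tools=request.tools,
    )
    # Return something serializable so Temporal can record it durably.
    return response.model_dump()
```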

I'll come back to the tool invoker in just a minute, because there's something interesting going on there. Now, if we go to the workflow, the workflow is also pretty darn straightforward. What I have here is my workflow definition, and you can see it's a class. The reason it's a class is that when you create a workflow, you create what I call the application main, and that's what has the workflow.run on it. But there are also a handful of other abstractions I'm not going to cover today, like signals: for a running workflow, you can signal into it. We also have an abstraction called an update, a special kind of signal, and the analog for reading, which is queries. Those things are added to this workflow class as functions annotated with signal, update, or query. That's why we've got a class for the workflow. And if we look at the logic in here, you can see that I have a while True. This simple application is just that same picture I showed you earlier: we're just looping on the LLM, and if the LLM decides to do so, we're going to call tools. That's the whole application. But you're going to see that I'm doing a couple of interesting things with Temporal here.

So, in order to invoke the LLM, I execute that activity. You can see that I'm passing in my model. The instructions I won't show you here, but you can see it all in the repository. The helpful-agent system instruction basically just says: you're a helpful agent; if the user says something and you think you should be using a tool, let me know, choose a tool; otherwise, respond in haikus. You'll see that in just a moment. Haikus are like the foobar of the AI world, right? It's the hello world of the agentic space. So we're going to respond in haikus. And that's it. We're doing this in a while True, and I've got a couple of print statements there. You're going to see how this runs in just a moment.

A simplifying assumption here: I'm assuming it's only calling one tool at a time. I'm grabbing the output of that, and then I just take a look at it and ask: is it a function call? If it is, I'm going to handle that function call (I'll show you that code in just a second), take the output from it, and add it to the conversation history. I'm not doing any fancy context engineering here, none of that. I'm just tacking onto the end of the conversation history.
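Putting those pieces together, the loop being described is roughly the following sketch. It assumes one tool call per turn, as the speaker does; `invoke_llm`, the history format, and the response-shape accesses are illustrative stand-ins for the cookbook's code:

```python
# Sketch of the agentic-loop workflow: loop on the LLM, run a tool when
# asked, append results to the conversation history. Shapes are illustrative.
from datetime import timedelta

from temporalio import workflow


@workflow.defn
class AgenticLoop:
    @workflow.run
    async def run(self, user_input: str) -> str:
        history = [{"role": "user", "content": user_input}]
        while True:
            response = await workflow.execute_activity(
                "invoke_llm",  # the activity sketched earlier
                args=[history],
                start_to_close_timeout=timedelta(minutes=1),
            )
            item = response["output"][0]  # simplifying: one item per turn
            if item.get("type") != "function_call":
                return item["content"][0]["text"]  # the LLM decided it's done
            history.append(item)  # record the LLM's tool request
            result = await workflow.execute_activity(
                item["name"],  # dynamic activity: tool name as activity name
                args=[item["arguments"]],
                start_to_close_timeout=timedelta(seconds=30),
            )
            history.append({
                "type": "function_call_output",
                "call_id": item["call_id"],
                "output": str(result),
            })
```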

Okay. Now, handling the function call is really straightforward as well. The first thing I'm doing is adding the response from the LLM. By the time we're done with this function call, we're going to have added two things to the conversation history: the response from the LLM that says "please make a function call," and then, once we've done the function call, the result. I just showed you where we add the result of the function call. So here, this is just me adding that, and this is some of the squirrelly stuff: I have this application running against the Gemini API as well, and the biggest pain in the butt in all of this is that the formats are different. I have to rewrite, because the JSON formats of conversation history differ between the models. Yes, I know LiteLLM is out there, but I don't like least common denominators, and I also like to understand what those formats look like. So you can see here that I'm doing some ugly parsing, and then, remember, I'm handling the tool call here: I've pulled the tool call out of the response from the LLM, and then I invoke that activity, which is execute_activity, and the activity name is the tool name.

Now, one of the things I was really intent on here is that I didn't want to build one agentic loop application that does one set of tools and then have to rebuild a whole other one when I have a different set of tools. I don't remember who talked about this on stage this week, but somebody said that the agentic pattern is fairly standard, and what we're doing now is inserting things into this standardized agentic loop. That's exactly what these agent frameworks are doing, and I wanted to do that here in the Temporal code as well. The cool thing is that Temporal has something called a dynamic activity. A dynamic activity allows you to call an activity by name, where that activity is dynamically found at runtime. The activity handler here, and I'm going to show you the code in just a second, is basically going to take in that name. And remember, this is event-driven: we have an activity that's waiting for things on a queue. You can configure one of our workers to say, "Hey, this is a worker that will pick up anything off of an activity queue, no matter what the name is." So you don't have to tightly bind to a specific topic name, for example.

Yes, question. "Do I need to map in advance which tools will be available for the agent, based on the activity?"

That is separate, and I'm going to show you that module. There's a module here called tools; you see the tools directory. The way I'm running it here, it loads that stuff at the time I load the application. So I'm not doing any dynamic loading, but I can swap that tools module in and out, and the agentic code does not change at all. I'm not going all the way to implementing a registry and doing dynamic calling of those things. You can do that, but this simple example has just put all of that into a separate module, and you'll see how that module can be switched in and out because I'm loading it at the start of runtime. So, simplifying, but yes, you could do that.

Okay. So, I'm just going to call an activity. Let's take a look at what that activity looks like: it's this tool invoker. You can see it has the activity decorator just like I showed you before, but now it says dynamic=True. That means this activity handler will pick up anything showing up on a queue that isn't already being picked up by some other activity. It'll pick up get_weather, it'll pick up get_random_number, it'll pick up whatever shows up in there. And no, you don't have to register all of those; those are resolved dynamically, so you don't have to register them with the worker. What you can see here is that we grab that, we get the tool name, and then I'm effectively looking up the function. There are no tool names in here at all; it's looking the tool name up in what is metaphorically a dictionary. I'll show you those functions in just a second. I have one function, a get_tools function, which, by the way, let me go back to that: down in the workflow, when I invoke the LLM, right here, notice that I made this get_tools call. It's completely outside the scope of the workflow and the activities; it's in its own module. I'll show you that function in just a second.
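A sketch of the dynamic handler pattern being described, using the Python SDK's `dynamic=True` decorator argument (`get_handler` is an assumed lookup in the tools module, shown in a moment):

```python
# Sketch of a dynamic activity: one handler for any otherwise-unregistered
# activity name. get_handler is the assumed tools-module lookup shown later.
from typing import Sequence

from temporalio import activity
from temporalio.common import RawValue


@activity.defn(dynamic=True)
async def tool_invoker(args: Sequence[RawValue]) -> str:
    # The activity name the workflow invoked is the tool name, e.g. "get_weather".
    tool_name = activity.info().activity_type
    # Decode the single argument payload into a plain dict of tool arguments.
    tool_args = activity.payload_converter().from_payload(args[0].payload, dict)
    handler = get_handler(tool_name)  # dictionary-style lookup in the tools module
    return await handler(**tool_args)
```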

Okay. So, back to the tool invoker. It's basically now taking the name and doing a get_handler. So, somewhere here is a get_handler call.

Here's the handler. "You just passed it." Oh, I just passed it, sorry. Line 17? 17, thank you, I appreciate it. So here's the get_handler, and I'll show you that function in just a second. Great question on how tightly bound these things are; let me show you where that binding is right now. I have a tools module here, and in the __init__ is where those two functions are defined. The get_tools function basically just takes the list of functions, and what we're passing in here are the JSON blobs that go to the LLM as the tool descriptions. So, for example, let me show you the get_weather one. If we go over here to get_weather, you can see that the JSON blob is right here. And it's interesting, because OpenAI, in the Completions API, had a public API that let you take any function with docstrings on it and generate the JSON for the tools. The Responses API has no such public API. So there's a warning in this tool helper (where is my tool helper? Helpers. Here we go. I guess I could put that in tools) that says: there is currently no public API to generate the JSON blob of tools for the Responses API, so I'm using an internal one. There is an open issue on this. So if we go back there, I've used an internal API to take my get-weather-alerts request, which is a Pydantic model that has the function in there plus some additional metadata, and it generates the JSON blob. Again, that's what you're getting with get_tools: the array of JSON blobs for each of the tools. And as I said, get_handler is basically a dictionary that I've implemented as a set of ifs: it takes the tool name and picks up what the actual function is, completely independently. So this particular example has a set of tools, and I'm going to demo those for you in just a second. You can just switch those things out; you do have to restart your Python process at the moment, just because of the way I've implemented it.
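The two tools-module entry points being described might look like this (a sketch; the JSON follows the Responses API function-tool shape, the if-chain stands in for the "dictionary implemented as a set of ifs," and `get_weather` is illustrative):

```python
# Sketch of the tools module's two entry points; names are illustrative.
def get_tools() -> list[dict]:
    # Tool descriptions handed to the LLM on every call (Responses API shape).
    return [
        {
            "type": "function",
            "name": "get_weather",
            "description": "Get weather alerts for a US state.",
            "parameters": {
                "type": "object",
                "properties": {"state": {"type": "string"}},
                "required": ["state"],
            },
        },
    ]


def get_handler(tool_name: str):
    # The "dictionary implemented as a set of ifs" from the talk.
    if tool_name == "get_weather":
        return get_weather
    raise ValueError(f"Unknown tool: {tool_name}")


async def get_weather(state: str) -> str:
    # Illustrative implementation; the real tool would call a weather API.
    return f"No active alerts for {state}."
```

Swapping in a different tools module then means only replacing these definitions; the workflow and the dynamic tool invoker stay unchanged.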

Okay. Does that make basic sense? All right, let me show you this in action. What I've got here is the worker running. I'm not spending a lot of time talking about workers, but remember I said this is all event-driven: there's something that picks work up off of event queues and then executes the right workflows and activities based on what it pulled off. The thing in Temporal that does that is what we call a worker. A worker is a process that you run, and with that worker you register the activities and the workflows it's going to be responsible for, so it will be looking for things on the queue to pull off. The worker itself is multi-threaded, so it is not one worker, one task. In general (it depends; you can do worker tuning), people run several hundred threads. You run one worker and it's already a concurrent, multi-threaded architecture.
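The worker being described boils down to something like this sketch (names match the earlier illustrative snippets, not the repo exactly):

```python
# Sketch of a worker: connects to the local dev service, registers the
# workflow and activities, and polls a task queue for work.
import asyncio

from temporalio.client import Client
from temporalio.worker import Worker


async def main() -> None:
    # Local dev service started with `temporal server start-dev`.
    client = await Client.connect("localhost:7233")
    worker = Worker(
        client,
        task_queue="agentic-loop",            # illustrative queue name
        workflows=[AgenticLoop],              # from the workflow sketch above
        activities=[invoke_llm, tool_invoker],
    )
    await worker.run()  # runs many concurrent task slots in one process


asyncio.run(main())
```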

This is some solid stuff; Temporal really, truly is distributed systems design. Okay. So I'm running the worker up here, which is effectively where you're going to see the outputs coming from the activities and the workflows, and I'm going to go ahead and run a workflow. So let's say: are there any weather alerts in California? That's where I'm from, and I think a lot of you are from, and hopefully where I will
