Lesson Description
The "Adding Approvals" Lesson is part of the full, AI Agents Fundamentals, v2 course featured in this preview video. Here's what you'd learn in this lesson:
Scott demonstrates adding an approval flow for tool calls, showing how to seek approval, handle rejections, and execute tools only when approved. He emphasizes passing tool arguments into approval prompts and handling user rejections gracefully.
Transcript from the "Adding Approvals" Lesson
[00:00:00]
>> Scott Moss: So for this, it's pretty simple, it's not too bad. We just need to keep track of some things. The approach I'm taking is to seek approval for all tool calls for now. Eventually, if you want, you can limit that to specific tools or specific inputs (that's what I had in the notes), so you'll be able to expand on this if you want to. But for now, we'll implement it for all tool calls. So I'll track a rejected flag. Because this is synchronous, we can just keep track of state inline, since the process is still running.
[00:00:38]
And then, instead of executing this tool right away, we want to make sure it was approved first. So we can say, hey, approved equals await, and there's a callback I made called onToolApproval. This will show a prompt to the user where they can approve something, so we can await it: the user will see something show up in the UI, and we'll wait for them to answer.
[00:01:06]
When they answer, this promise will resolve. So that will stay up as long as your terminal's up. We want to pass in the tool name, and we want to pass in the args. The args are important because we can't ask a user to approve a tool if they don't know what the arguments to that tool are. If I said, hey, do you approve me to write this file? What file, and what are you writing? I need to see the arguments, right?
[00:01:32]
So that's important, and so is how you show it gracefully. What if the arguments are some huge 30,000-line JSON blob? You've gotta get creative; that's where designers come in. So then, if not approved, we set rejected to true, and then we can break this loop. There's nothing else to do. Technically, you could signal to the LLM here, like, hey, I know you want to call this tool, but the user said no, they rejected it, so do with that whatever you will.
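As a rough sketch of that alternative (assuming an OpenAI-style chat history; this shape isn't from the course), the rejection could be fed back to the model as the tool's result:

```ts
// Hypothetical sketch: instead of breaking, report the rejection back to
// the model as the tool result and let it decide what to do next.
// Assumes an OpenAI-style tool message shape; not the course's actual code.
function rejectionResult(toolCallId: string) {
  return {
    role: "tool" as const,
    tool_call_id: toolCallId,
    content: "The user rejected this tool call. Decide how to proceed.",
  };
}
```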
[00:02:15]
You can decide if you want to be done, or you can ask the user for a follow-up, or you can tell the user they should let you do it; you can figure that out. We could do that, but I'm just gonna break. There's no wrong answer here; you can do whatever you want, OK? So if we get to this line, then it was approved and everything was good to go. But then we've gotta go down here. After this, we've gotta say, OK, well, outside this for loop, if you were rejected...
[00:02:50]
Or, I'm sorry, not outside the for loop, but outside this iteration over the tools: if you were rejected, then we want to break the outer loop as well. We want to stop looping, right? The reason we do that down there is that if we get to a point where there are no tool calls to make, but there was a rejection from the previous run, or the previous turn I guess, we want to catch it.
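Putting those pieces together, here's a minimal sketch of the flow, assuming a hypothetical agent loop. nextToolCalls and runTool are illustrative stand-ins, and onToolApproval is the callback described above:

```ts
// A minimal sketch of the approval flow described above. The loop shape,
// nextToolCalls, and runTool are hypothetical stand-ins for the course's
// agent loop; onToolApproval is the callback mentioned earlier.
type ToolCall = { name: string; args: Record<string, unknown> };

interface Callbacks {
  // Shows an approval prompt in the UI and resolves when the user answers.
  onToolApproval: (name: string, args: Record<string, unknown>) => Promise<boolean>;
}

async function runAgent(
  nextToolCalls: () => Promise<ToolCall[]>,
  runTool: (call: ToolCall) => Promise<void>,
  callbacks: Callbacks,
) {
  while (true) {
    const toolCalls = await nextToolCalls();

    // The process is synchronous, so a plain local flag is enough
    // to keep track of state inline.
    let rejected = false;

    for (const call of toolCalls) {
      // Seek approval for every tool call, passing the name and args
      // so the user can see exactly what they would be approving.
      const approved = await callbacks.onToolApproval(call.name, call.args);

      if (!approved) {
        rejected = true;
        break; // skip this tool and stop iterating this turn's calls
      }

      await runTool(call);
    }

    // Outside the iteration over the tools: if anything was rejected,
    // stop the whole loop instead of handing results back for another turn.
    if (rejected) break;
    if (toolCalls.length === 0) break; // no tool calls left; the run is done
  }
}
```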
[00:03:30]
All of this is one run. So let's see if we can get that to work. All right, I'm gonna say: read the package.json and tell me what dependencies this project has. You can see right here, it's like, OK, cool, I want to read the file. It wants to call read file on package.json, and here are the arguments it wants to pass: the path, package.json. And I can say no, right?
[00:04:18]
And then it breaks, right? It's over; the loop's over because I did break. I could have passed that back to the agent instead, like, the user said no, and then the agent could figure out what to do, which is probably better. Then let's run this back and do the same thing, and this time I'll say yes. So then it did read the file, and it told me what was in it. Cool. And now it thinks I wanted it to do all this other stuff; I didn't say that.
[00:04:59]
I just wanted you to read the file, chill out. And that's a simple approval flow. The sweet thing about this is that the LLM has no idea about it. It has no concept of approvals. It had no idea that it was suspended and waiting on a user to approve, and it had no idea whether the user approved it or not, at least in my implementation. If you told the agent that it got rejected, then sure, it would know. But I think telling the agent that the user didn't approve is fine.
[00:05:31]
What I think is bad is creating an ask-for-approval tool. Because you could do that: we could just make a tool called ask_for_approval, give it a description that says, use this tool to ask the user about tool calls that need approval, and then somewhere, in the system prompt or elsewhere, we'd have to say, here are all the tools that need approval, so when you want to call one of those tools, call the approval tool first.
[00:05:56]
We could do that. It would be bad, because what if the model decided not to do that on one of the tools? You're screwed. So then you might say, all right, I'll do this inside the tool. I'll go into the tool I want approvals for; let's say, instead of web search, delete file. Inside delete file, at the top of the execute function, I can do some type of approval logic where you have to await an approval.
[00:06:24]
You could do that too; you just gotta make sure that, in our architecture, some context is passed down so you can show something to the user. You'd be like, context.ui.askApproval, right? So we show the user an approval prompt, wait for it, and only continue if it got approved. If it didn't get approved, we won't run any of this code.
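Here's a sketch of that per-tool version with a hypothetical delete_file tool; the tool shape and ctx.ui.askApproval are assumptions, not the course's exact API:

```ts
import { promises as fs } from "node:fs";

// Sketch of the per-tool alternative. The tool shape and ctx.ui.askApproval
// are hypothetical; the point is that the approval gate sits at the top of
// execute, so none of the real work runs without a yes.
interface ToolContext {
  ui: { askApproval: (message: string) => Promise<boolean> };
}

const deleteFile = {
  name: "delete_file",
  description: "Delete the file at the given path.",
  async execute(args: { path: string }, ctx: ToolContext) {
    // Wait for the user's answer before doing anything destructive.
    const approved = await ctx.ui.askApproval(`Delete ${args.path}?`);
    if (!approved) return "The user rejected this deletion.";

    await fs.unlink(args.path);
    return `Deleted ${args.path}.`;
  },
};
```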
[00:06:47]
You can do that, but then you have to write it in so many places, so I don't want to do that. There's no really wrong way to do it; I just did it at the loop level, globally. That's not wrong. But the way you definitely should not do it, in my opinion, is making an approval tool and expecting the LLM to just figure it out. The first time I built human approvals, that's what I did, and I spent three months trying to eval it.
[00:07:11]
It's never gonna work. Why am I spending time trying to get the model to pick the right moment to ask the user for approval when it doesn't need to know about it at all? I'll just do it myself; it's deterministic, right? So yeah, way better. There's a tangent related to human in the loop that's very hard to describe; I'm gonna do my best. Claude Code actually does this very well.
[00:07:37]
Sometimes an agent like Claude Code needs additional context. It's very similar to asking for an approval, in that the agent isn't asking; the system is prompting you. It might ask you questions it needs you to answer. By default, a good model with a good system prompt will do that by itself; you don't need a tool for it to reach for. But sometimes having a tool like ask_user or ask_for_help is super useful, because not every model will do that, and even when they do, the way they ask might not be conducive to getting a good answer.
[00:08:15]
So sometimes it's really cool to give the agent an ask-user-for-help tool with a description like: use this tool when you're at a fork in the road and you're not sure which response to use from a tool result, or you need more context, or the user wasn't specific enough, or you're just unsure of anything; use this tool before you give up. And then you can give it an input like an array of questions.
[00:08:43]
Here's an array of questions you can ask the user, and then you can show the user a UI, like a form: here's a question. You could even say, in that array of questions, are there any suggested options we should show the user? For instance, maybe a web search tool result came back with tons of options, and you don't know which one to pick because the user didn't tell you.
[00:09:06]
For instance, if I asked it to go to the Nike store and find me a pair of Ja Morants in a certain color, and it went to the Nike store in the browser and found them, but I didn't tell it my size, it could ask me a question: here are all the possible sizes I see, can you pick one? Being able to do that is super useful versus just expecting the model to handle it by itself.
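Here's a sketch of what a tool like that could look like; the name ask_user_for_help, the question/options shape, and the ctx.ui.askQuestions hook are all hypothetical:

```ts
// Sketch of a hypothetical ask_user_for_help tool. The question/options
// shape follows the description above; ctx.ui.askQuestions is an assumed
// UI hook that renders a form and resolves with the user's answers.
type Question = { question: string; options?: string[] };

interface HelpContext {
  ui: { askQuestions: (questions: Question[]) => Promise<string[]> };
}

const askUserForHelp = {
  name: "ask_user_for_help",
  description:
    "Use this when you are at a fork in the road: a tool result is " +
    "ambiguous, you need more context, or the user was not specific " +
    "enough. Use it before giving up.",
  async execute(args: { questions: Question[] }, ctx: HelpContext) {
    // Show each question (with any suggested options) as a form in the UI.
    const answers = await ctx.ui.askQuestions(args.questions);
    return JSON.stringify(answers);
  },
};
```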
[00:09:26]
Making it a tool kind of forces the model to be very specific about the question it's asking, and even to offer you options, and then you can surface that really easily. I think the update Claude Code shipped about a week ago does just that: it asks you questions, and you use your keyboard to pick the options, like option 1, option 2, something else, hit enter, and it shows you the next question.
I was like, that's pretty good. So I thought that was pretty useful. I had that in here, but I was like, nah, it's just too much; it's mostly all UI, so I didn't want to do that.