Lesson Description
The "Prompt with Structure Output" Lesson is part of the full, Practical Prompt Engineering course featured in this preview video. Here's what you'd learn in this lesson:
Sabrina prompts the agent with the structured output needed to implement a metadata tracking system for the Prompt Library application. Once the plan is reviewed the agent generates the metadata feature and the applicaiton is tested.
Transcript from the "Prompt with Structured Output" Lesson
[00:00:00]
>> Sabrina Goldfarb: Now that we have seen that structured output can be great, we're going to use a structured output prompt to create a metadata tracking system for our Prompt Library. It's going to track the model used, and we're going to add a little token estimator that's going to have a confidence score within it. So let's go over to our code, and I hope everyone has started a fresh chat, just to keep it nice and fresh so we don't get our context lost in the middle of it.
[00:00:31]
Then I will start typing out our new prompt: create a metadata tracking system for a prompt journal web application that is attached to our prompts in our Prompt Library. So, I mentioned that I want this specifically attached to our prompts in our Prompt Library, right? This is going to be metadata for each prompt. It's not like its own separate thing. So now we're going to give it some function specifications, and I'm going to copy these over one at a time to kind of go over them.
[00:01:14]
So first, function specification one: track model. So it's going to take in a model name, which is a string, and it's going to take in some content, which is a string, right, and a metadata object. Accept any non-empty string for the model name, auto-generate a timestamp, and estimate tokens from the content. OK, I'm going to keep adding more. I also want a function to update the timestamps, right?
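(To make that first spec a bit more concrete, here's roughly what a function along those lines could look like. The names trackModel and estimateTokens are placeholders for illustration, not the code the agent actually generated; estimateTokens is sketched a little further down.)

```javascript
// Hypothetical sketch of the "track model" spec described above: accept any
// non-empty string for the model name, auto-generate a createdAt timestamp,
// and attach a token estimate derived from the content.
function trackModel(modelName, content, metadata = {}) {
  if (typeof modelName !== "string" || modelName.trim() === "") {
    throw new Error("Model name must be a non-empty string");
  }
  return {
    ...metadata,
    model: modelName.trim(),
    createdAt: new Date().toISOString(),    // auto-generated timestamp
    tokenEstimate: estimateTokens(content), // sketched further below
  };
}
```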
[00:01:44]
So I want to be able to update the updated at field and validate that updated at happened after created at, OK? Next, I want to have an estimate tokens function, right? So, we talked about tokens to words, where our base calculation is going to be 0.75 times the word count, right? And then it gives a max. And then it's checking if it's code, because code tokenizes a little differently than regular words, so we're going to multiply both by 1.3.
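(A hedged sketch of what that update-timestamps spec might translate to; again, the naming is illustrative rather than the generated code.)

```javascript
// Hypothetical sketch of the "update timestamps" spec: refresh updatedAt and
// validate that it doesn't fall before createdAt.
function updateTimestamps(metadata) {
  const updatedAt = new Date().toISOString();
  if (new Date(updatedAt) < new Date(metadata.createdAt)) {
    throw new Error("updatedAt must not be earlier than createdAt");
  }
  return { ...metadata, updatedAt };
}
```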
[00:02:22]
We're going to give a confidence score of high if it's less than 1000 tokens, medium if it's 1000 to 5000, and low if it's greater than 5000 tokens, because stuff might get a little more hairy, OK? And now we're going to add a couple of other things. All right, so we added validation rules. All dates must be valid ISO 8601 strings. Model name must be a non-empty string, max 100 characters, and we wanted to leave the model name flexible, right?
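(Putting the estimator spec and the confidence bands together, a sketch might look like the following. The transcript doesn't spell out how the max is computed or how code is detected, so those two pieces are assumptions for illustration only.)

```javascript
// Hypothetical sketch of the token estimator: base estimate of 0.75 tokens
// per word, a 1.3x multiplier when the content looks like code, and a
// confidence band based on the size of the estimate.
function estimateTokens(content) {
  const words = content.trim().split(/\s+/).filter(Boolean).length;
  const looksLikeCode = /[{};=<>]/.test(content);  // assumed heuristic
  const multiplier = looksLikeCode ? 1.3 : 1;

  const min = Math.ceil(words * 0.75 * multiplier);
  const max = Math.ceil(words * 1.5 * multiplier); // assumed upper bound

  let confidence;
  if (max < 1000) confidence = "high";
  else if (max <= 5000) confidence = "medium";
  else confidence = "low";

  return { min, max, confidence };
}
```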
[00:03:01]
Because models change a lot. Like, a couple of days ago Sonnet 4 became Sonnet 4.5 in chat, right? If I go to my Claude right here, we can see I'm using Sonnet 4.5. Sonnet 4 was here the other day, so we want to leave a little bit of flexibility so that when new models come out we can just say, here's the new model that we used. OK, and then throw errors for invalid inputs with descriptive messages.
[00:03:30]
We're going to have an output schema. We're going to have a visual display: create an HTML/CSS component that adds and displays metadata in the prompt card with the model name, the timestamps, and the token estimate with color-coded confidence (green, yellow, and red), sorted by created at descending. Constraints: pure JavaScript only, no external libraries, must work in a browser environment, and include try/catch error handling.
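(For the visual-display part of the spec, here's one rough idea of how a metadata block with a color-coded confidence class could be rendered in plain JavaScript; the class names and markup are assumptions, not the agent's actual output.)

```javascript
// Hypothetical rendering sketch: builds a small metadata block for a prompt
// card, with a class keyed to the confidence level so the stylesheet can
// color it green, yellow, or red.
function renderMetadata(metadata) {
  const container = document.createElement("div");
  container.className = "prompt-metadata";
  container.innerHTML = `
    <span class="metadata-model">${metadata.model}</span>
    <span class="metadata-created">${new Date(metadata.createdAt).toLocaleString()}</span>
    <span class="metadata-tokens confidence-${metadata.tokenEstimate.confidence}">
      ~${metadata.tokenEstimate.min}-${metadata.tokenEstimate.max} tokens
    </span>
  `;
  return container;
}
```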
[00:04:03]
So let's take a look and see how GPT-4o handles this. I'll take it. OK, it's reading the existing project files to integrate the metadata tracking system without overriding current content. I like it. It's reading the styles to extend them with the metadata display styles, adding confidence colors and a layout, and reading the script to integrate the new metadata logic and UI rendering. And we can kind of see that as our prompting techniques get a little bit more sophisticated, we've been asking for harder and harder things.
[00:04:37]
And I know we're not spending today going through all of the code that's generated, right? Because it's generating quite a bit of code, especially when we look at the styles. We've got quite a bit of styling. We've got a bunch of JavaScript that we could really dig into, but as we can see, those earlier prompts we gave had much less control than our later prompts have, right?
[00:05:02]
We're really fine-tuning as we go, checking things as we go, and we can combine these prompting techniques to take that even further. So now not only could we say, OK, we wanted structured output, but we could have also given it a few-shot examples to show exactly how we want that structured output to look, right? And then we can just keep building on these techniques to create the most advanced technique we need.
[00:05:27]
OK, so we're going to skip this again. It's just trying to run a quick smoke test by reloading the page, and we're just letting it kind of roll. OK. Output schema: each stored prompt now includes ID, title, content, metadata with model, created at, and updated at, and a token estimate, right, with min, max, and confidence. OK, it says let me know if you'd like tests, refactoring into modules, or import/export functionality next.
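(Based on that description, a stored prompt entry would presumably look something like this; the exact key names, and the timestamps shown here, are illustrative rather than taken from the generated code.)

```javascript
// Illustrative example of the stored-prompt shape described above.
const examplePrompt = {
  id: "prompt-001",
  title: "Structured output prompt",
  content: "Convert this error message into JSON...",
  metadata: {
    model: "Claude 4.5 Sonnet",
    createdAt: "2025-10-01T14:32:00.000Z", // assumed timestamps
    updatedAt: "2025-10-01T14:32:00.000Z",
    tokenEstimate: { min: 17, max: 38, confidence: "high" },
  },
};
```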
[00:05:57]
It really wants to add that import/export functionality, which we will be adding, by the way, just on our own terms. OK, so I'm going to keep all this and just see what has happened, right? Let's take a look at our Prompt Library and see what has happened. OK, cool. So we can see that this got added, but we had unknown model. I'm going to just delete this prompt so that we don't have any prompts with an unknown model in here, but we can see that now we have a new section for model, right?
[00:06:32]
So we can say add prompt, maybe I'll put structured output prompt. And I'm going to grab one of the shorter structured output prompts that we had. And we're going to put it in here. OK. So now we have a structured output prompt, and we're going to say that we put this one in Claude 4.5 Sonnet, and we're going to save the prompt. So now if we take a look at our Prompt Library, we have added so much to it, and it might not look huge, but metadata tracking is really important and can be really complex.
[00:07:11]
So now we have our structured output prompt. We said what the prompt was, convert this error into JSON, return only, and then it gets cut off because, remember, we said we only wanted to show a few of the words, but we can go back into it. So now we can say, OK, that was like a three-star prompt. That was a pretty good prompt. Maybe we would want to change something to be more specific. Maybe we want to add a note and say structured output prompts hit better in Claude than GPT.
[00:07:46]
I'm not saying that was true. I'm just saying we're putting it as an example for our note, right? And now we can also see that we've added this little token estimator, right? So, tokens: it says high confidence, because remember, it said high confidence as long as we had under 1000 tokens estimated. So it estimated somewhere between 17 and 38 tokens, and we're tracking the model now.
[00:08:12]
So this will be really helpful for us, right, if we continue to utilize this prompt library, because now we can say, which model did we use it on, what was the date that we used it, and any notes, right? Did we use structured output? Was it helpful to use structured output? Did it not matter? Did I spend way too much time on those few-shot examples that I was prompting where that wasn't useful? I can continue to add all of these notes to kind of make it a better prompting library for myself.