Lesson Description
The "Zero-Shot Prompt" lesson is part of the full Practical Prompt Engineering course featured in this preview video. Here's what you'd learn in this lesson:
Sabrina introduces the zero-shot prompt. These prompts provide direct task requests without any examples. They work well for common tasks but rely entirely on the model's pre-training knowledge. A zero-shot prompt is used to recreate the Prompt Library application.
Transcript from the "Zero-Shot Prompt" Lesson
[00:00:00]
>> Sabrina Goldfarb: Now we have to talk about a different kind of prompt, a better prompt, right? A prompt that isn't super different, but is going to get us much better results. And the one last thing I will mention before we do that is that I started a new chat, right? We all noticed that I started a new chat. And I didn't have to do that, and you don't have to do that. But for myself, whenever I see something is going pretty significantly wrong, that's when I decide to start over.
[00:00:37]
A lot of people ask: when do I start over with a new chat? When the chat is really long, even if it's a really good one, summarize it and move to a new one. Or maybe everything has gone kind of wonky, maybe you're in one of those bug loops: you start to create something, you say, no, this is a bug, the model says, I fixed it, you say, no, it's still a bug, it says, I fixed it, and it's still a bug.
[00:01:01]
Start a new chat over. Don't be afraid to lose that context that you gave it. Worst case, ask the model to provide you context for the new chat. Be like, OK, hey model, I'm about to start a new chat. Can you please summarize what we've been talking about? Summarize the tone, if you really liked the tone of the chat, summarize with examples if you want to kind of bring all that in, and then move over to a new chat.
[00:01:30]
Don't let chats get really long and degraded. And if you have a chat that goes totally awry like ours just did, where it was adding all these features we hadn't asked for, you might as well just start a new chat. So that's my two cents on starting new chats. OK. The next prompting technique we're going to talk about isn't going to look too different, but it actually is pretty different.
[00:01:57]
This is called a zero-shot prompt, right? Zero-shot prompting is a direct task request without any examples. All zero-shot prompts are standard prompts, but not all standard prompts are truly zero-shot, right? With these types of prompts, we are asking the model to rely entirely on its pre-training knowledge. Now, I haven't talked about how much data these models are trained on, but suffice to say they're trained on literal terabytes of data, massive amounts of it, and the models themselves have billions of parameters at this point, right?
[00:02:38]
So they have a lot of pre-training knowledge, and if you really think about it, there's a lot of code on the internet, right? So a lot of that pre-training knowledge is actually code as well, which is really convenient for us as developers. Now, it's not always really good code, right? I'm sure we've all had a Bob who sits next to us and is known for not writing the best code, and hopefully we're like, wow, I hope GPT-5 wasn't trained on much of Bob's code. But in reality, that's all a drop in the bucket, right?
[00:03:08]
These models are trained on so much data that we can utilize zero-shot prompting to do simple tasks for us, right? To do common tasks. This works really well for any common task that we have. It's also really good if we don't need a very specific format coming out of our prompts. So if I'm doing something simple, and it doesn't really matter whether the prompt gives me a sentence or a paragraph or a chunk of code, this is great for that too.
[00:03:40]
It's really important to know that with zero-shot prompts, the quality varies based on the task complexity and specificity. So the more complex the task, the harder it will be for the model to give you a good response with zero-shot prompting. We're not providing any examples of what we want. But the more specific that we can be with our zero-shot prompt, the better our output will be, right? Same thing as if I'm pair programming with any of you, the more specific I am about what I need from you, the better the response is going to be.
[00:04:18]
I'd say a good chunk of my conversations with any of these models are zero-shot prompts, where the model already has it within its training data to really understand what I need. And I really try to break my tasks down to be as specific and as small as possible. This way I have the ability to go through every single line of code that's generated. This is my version of vibe coding, right?
[00:04:44]
I go through every single line of code that's generated and I say yes, no, yes, no, and why, right? And so the more specific and the smaller the task, the better your zero-shot prompts will be. So let's take a quick look at some zero-shot prompts in our Claude chat, right? These are non-developer chat questions, good examples of the kinds of prompts we might build into an AI application of our own.
[00:05:17]
So, let's say we want to make an AI assistant that helps with product reviews or something like that, right? We can say: classify the customer rating into neutral, negative, or positive. Then I'm going to say the text is "The product was OK," and then we're going to ask for the sentiment. We can see this is a zero-shot prompt because we didn't provide any examples. We didn't say: classify the customer rating into neutral, negative, or positive; the text was "I love this product"; the sentiment was positive, right?
[00:06:00]
We didn't have to say that, because classifying sentiment shouldn't be that difficult. Of course, there are nuances, right? If the customer says, I really liked this, but I didn't like that, and this was OK, but this part was great, it can get a little more complex. But at its most basic, classifying customer sentiment is pretty simple. So let's see what Claude does with this. Sentiment: neutral.
[00:06:29]
I think that's fair when we're saying the product is OK. The word "OK" indicates a middle-ground opinion, neither particularly satisfied nor dissatisfied. The customer isn't expressing enthusiasm or disappointment, just an average, acceptable experience with the product, right? OK, great. Let's start a new chat and do one more zero-shot prompt: translate this text into Spanish. And maybe I want the text "the bathroom" translated.
[00:07:07]
I'm not giving Claude any examples of other translated words. I'm just assuming that Claude, within its training data, knows some English-to-Spanish translation, and he says "baño." I know Claude isn't a he, but I'm going to use he pronouns for him just because I like to, I don't know. So we can see that Claude was able to make this very simple translation for us without us having to add anything special.
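Both demos have the same shape: a task instruction plus the input, with no labeled examples anywhere. Here's a rough sketch of that shape as a template, the kind of thing you might assemble inside your own AI application. The `buildSentimentPrompt` helper is a hypothetical name of ours, not part of any SDK:

```javascript
// A sketch of the zero-shot sentiment prompt from the demo, assembled as a
// template. buildSentimentPrompt is a hypothetical helper of our own.
function buildSentimentPrompt(text) {
  return [
    'Classify the customer rating into neutral, negative, or positive.',
    `Text: ${text}`,
    'Sentiment:',
  ].join('\n');
}

// No labeled examples appear anywhere in the prompt; the model relies
// entirely on its pre-training knowledge to do the classification.
const prompt = buildSentimentPrompt('The product was OK.');
console.log(prompt);
```

The moment you start prepending labeled examples ("Text: I love this product. Sentiment: positive."), the prompt stops being zero-shot.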
[00:07:36]
So now I want us to go back and restart our Prompt Library application, and we're going to utilize a zero-shot prompt with a whole lot more detail to say: hold your horses, GPT-5, don't add export buttons, don't add search functionality, don't add buggy saves. All we want is save and delete, and then later we'll add on more complex features, once we have more complex prompting techniques that can help us get there.
[00:08:08]
So I will pull up my Copilot once again. We can see we've started totally fresh. Feel free to go into the course material to get this next prompt, but I'll start typing it out here: Create a Prompt Library in HTML, CSS, and JavaScript. We can see this is going to start very similarly to our last prompt, and so far it's identical, right? Because we know we haven't really added much yet.
[00:08:39]
We're just saying now we understand that we're zero-shot prompting. We're giving some good direction based on what the model's knowledge is. So I'm going to go ahead and I'm going to copy the rest of this prompt, and then I will read it to you and we'll talk about why it's going to work a little better, and hopefully you'll see for yourself as well. Create an HTML page with a form containing fields for the prompt title and content.
[00:09:09]
Add a Save Prompt button that saves to local storage. Display saved prompts in cards. Each prompt card should show the title, a content preview of a few words, and a delete button. Deleting should remove the prompt from local storage and update the display. Style it with CSS to look clean and modern, with a developer theme in light mode. Include all HTML structure, CSS styling, and JavaScript functionality in their own files, just for my personal preference.
[00:09:40]
If you're copying and pasting directly into a Claude or ChatGPT chat, I suggest putting it all in one file so you only have one file to copy and paste. Personal preference, no big deal. The prompt continues: make it so it can be run immediately and includes no other features, right? With our zero-shot prompt, we're pinpointing: what do we want? We're being more specific. No other features. I also added that I want to be able to use Live Server, press Go Live, and see the application.
[00:10:13]
Do not add other dependencies. Do not add any features other than the ones I listed. OK, let's run it, and let's see if GPT-5 can handle this newer prompt. This prompt is in theory no different from our standard prompt, other than having more specificity and less complexity, right? More specificity: this is what we want, this is what we don't want. Less complexity: we only want these two features.
[00:10:44]
We don't want anything else. Just focus on this. So now we can see that GPT-5, through Copilot, is talking to us, creating the required HTML, CSS, and JavaScript files for the Prompt Library with the specified features. OK, we're doing the modern light mode. We'll give it a minute to think through all of this, and hopefully we'll get something a little more consistent and better from this more specific prompt.
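While Copilot works, here's a rough sketch of the save/delete logic our prompt asks for. The function names are our own guesses, not necessarily what GPT-5 will generate, and in the browser this would use `window.localStorage`; a tiny in-memory stand-in is used here so the logic can run anywhere:

```javascript
// A minimal sketch of the save/delete behavior the zero-shot prompt specifies.
// In the generated app this would be window.localStorage; here we use an
// in-memory stand-in with the same getItem/setItem shape.
const storage = (() => {
  const data = {};
  return {
    getItem: (k) => (k in data ? data[k] : null),
    setItem: (k, v) => { data[k] = String(v); },
  };
})();

const KEY = 'prompts';

function loadPrompts() {
  // All saved prompts live under one key as a JSON array.
  return JSON.parse(storage.getItem(KEY) || '[]');
}

function savePrompt(title, content) {
  const prompts = loadPrompts();
  prompts.push({ id: Date.now(), title, content });
  storage.setItem(KEY, JSON.stringify(prompts));
  return prompts;
}

function deletePrompt(id) {
  // Deleting removes the prompt from storage and returns the new list,
  // which the UI would then re-render.
  const prompts = loadPrompts().filter((p) => p.id !== id);
  storage.setItem(KEY, JSON.stringify(prompts));
  return prompts;
}

function preview(content, words = 5) {
  // "A content preview of a few words" for each card.
  return content.split(/\s+/).slice(0, words).join(' ');
}
```

Notice how small the surface area is: two operations and a preview helper, which is exactly why the "no other features" constraint matters.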
[00:11:16]
In the meantime, I'll give a quick note on model selection and how I personally approach it. Yeah, go ahead. Simon online is asking: I'm not sure I understand the difference between a standard and a zero-shot prompt. Can you provide a standard prompt that would not be a zero-shot prompt? The standard-versus-zero-shot distinction is very nuanced. A standard prompt is a very basic, one-question, one-line, fairly indirect ask of a task, right?
[00:11:52]
Versus our zero-shot prompt, which is a very specific task request, right? We can see in this task request that we have very specific things we want the prompt to do. It's broken down into add this, don't add this. The only thing our zero-shot prompt doesn't have is specific examples of how to do it, right? So it's usually going to be a much more verbose prompt, but it's not that typical standard prompt of just: what color is the sky, why is thunder scary, can you build me an application?
[00:12:28]
This is going to have that additional complexity of us as developers understanding what we need to build in this feature, and breaking it down step by step for the model. It's very nuanced; standard prompts and zero-shot prompts are extremely similar. I promise everything else in this course has much more of a difference to it, which makes it a little easier, but I wanted to make sure we at least covered both of them somewhat.
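To make the nuance concrete, here is a side-by-side sketch of the two kinds of prompt as string constants. Both are example-free; only the second states the task precisely. The wording is paraphrased from the lesson, not the exact course prompt:

```javascript
// A standard prompt: one line, indirect, no detail about what to build.
const standardPrompt = 'Can you build me a prompt library application?';

// A zero-shot prompt: still no examples, but a precise, broken-down task
// request with explicit constraints.
const zeroShotPrompt = [
  'Create a Prompt Library in HTML, CSS, and JavaScript.',
  'Add a Save Prompt button that saves to local storage.',
  'Each card shows the title, a short content preview, and a Delete button.',
  'Do not add any other features.',
].join('\n');
```

Neither prompt contains a worked example of input and output; what separates them is specificity, not examples.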
[00:12:58]
But zero-shot, again, just has that specificity we talked about. OK, great, and it looks like our Copilot has finished, so let's keep all of this, and let's go live again. Let's see what happened. OK, if I click Go Live, it looks like I'm still live, actually. So, localhost:5500. We can see, and hopefully yours is looking a lot better than when you used your standard prompt as well, that now we have a much simpler UI.
[00:13:30]
We don't have search functionality, we don't have any export buttons, we've got nothing but a title and some content, and some saved prompts. And let's just test it out to make sure it works, because these things can still be buggy, right? We didn't write the code, so we can't just trust that the code is perfect. So I'm going to say zero-shot prompt, and then I will just enter my prompt here, copy it over, and press save.
[00:14:07]
Amazing, right? So, zero-shot prompt: like we asked for, just a small preview of the prompt shows, and we have this delete button. If I click it, it deletes. I'll enter the prompt again and save it, just to confirm the delete worked, right? Again, yours might look incredibly different from mine, and that's totally OK. That's why I'll be pushing the code at the end of each section where we take a break, right?
[00:14:34]
Because I want to make sure that if you want what I've pushed, or if your prompts are going a little haywire, you're still able to get this Prompt Library. One thing I will say about these prompts, or any prompts: if you find there's something small you don't like, but most of it is fine, you don't have to delete and start over again and again.
[00:14:58]
Iterating is great; we do that all the time as developers. But you can also just add a standard prompt here. So maybe you were utilizing Claude 3.5, and Claude 3.5 decided to add export functionality, but everything else was perfect and you really liked it. You don't have to start over; you can just say, take out the export functionality, right? We can go back at any time to our standard prompt and use it to tweak things as we need to.
[00:15:27]
Mine doesn't have export functionality, so I'm not going to send that. But if something has gone slightly wrong on your LLM side, maybe your assistant decided that it really loves export functionality, which happened to me a couple of times when I was prepping and practicing. So feel free to just utilize a standard prompt to say, get rid of this, add this, this is a bug, fix it.