Lesson Description
The "Future Proofing Prompts" Lesson is part of the full, Practical Prompt Engineering course featured in this preview video. Here's what you'd learn in this lesson:
Sabrina shares some advice for future-proofing prompts as models evolve. Documenting how prompts are used and the models where they are successful makes it easy to test them as new models are released. Also, recognizing that smaller models may work better with different prompting techniques than larger models helps identify how prompts should be adjusted for other models.
Transcript from the "Future Proofing Prompts" Lesson
[00:00:00]
>> Sabrina Goldfarb: So I want to quickly talk about future-proofing ourselves, right? We made this super cool Prompt Library, and whether you prefer this version, the version you have, or the version from this morning, we didn't make it in vain. We made it for a reason: it gives us a way to version our prompts, track them, and test them on different models.
[00:00:30]
We can try these prompts when new models come out, and we can keep tabs and notes on all of it. There are tons of ways you can continue to test whether the prompting techniques from today still work well in the future. You can create an entire testing suite, or you can use this prompting library. I would highly suggest you do, because it is really hard to say what the future holds for these LLMs.
[00:01:01]
In theory, they could stop growing, right? Maybe the resources run out and they stop growing. Maybe we find a new model architecture that's better. Maybe we reach some version of artificial general or super intelligence in some other way. We don't know what's in the future for these models. We don't know if they're going to keep getting better, if they're going to stagnate, or if the industry decides smaller models are better for certain purposes and rolls back.
[00:01:31]
It's hard to say where that breakpoint is and what the boundaries of this transformer architecture actually are. So what I would really encourage you to do is record, whether in here or in personal notes, all of the prompts and techniques that you use. What do I use on a daily basis? How often do I use it? Why did I use it? Was it because a prompt I was using before didn't work?
[00:01:58]
Was it because I was just curious to see if "let's think step-by-step" was better? Keep track of everything: what model you used, when you used it, why you used it, how you used it. Keep track of how many shots you input. Keep track of your token usage. This way, as new models come out, you'll have a solid body of unbiased data. I might personally love Anthropic and Claude, but the data keeps me honest. And make sure you keep track of your worst prompts as well, not just your best.
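As a concrete sketch of what one entry in such a library might record, here is a minimal Python dataclass covering the fields above; the class name, field names, and example model string are illustrative assumptions, not something prescribed in the course:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PromptRecord:
    """One prompt-library entry: what ran, where, when, why, and how well."""
    prompt: str                     # the exact prompt text, verbatim
    model: str                      # which model you ran it on
    used_on: date                   # when you used it
    technique: str                  # e.g. "zero-shot", "few-shot", "chain-of-thought"
    shots: int = 0                  # how many examples you included
    tokens_used: int | None = None  # rough token usage, if you tracked it
    why: str = ""                   # why you tried it (curiosity? a prior prompt failed?)
    outcome: str = ""               # "best", "worst", or "medium" -- keep the bad ones too

# Hypothetical example entry
record = PromptRecord(
    prompt="Let's think step-by-step: ...",
    model="claude-3-5-sonnet",
    used_on=date(2024, 10, 1),
    technique="chain-of-thought",
    why="previous zero-shot attempt gave inconsistent answers",
    outcome="best",
)
```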
[00:02:36]
If I keep track of my best and my worst prompts within my Prompt Library, as well as the techniques I used in them, then when a new model comes out I can test every single one of those. I might find that chain-of-thought worked better on the newest model. Maybe my few-shot prompting was not as helpful as it used to be, or maybe it was a lot more helpful. Maybe my zero-shot prompt was way better because the model is way smarter, and now I don't have to spend all that time few-shot prompting anymore.
[00:03:07]
This is all a balance of art and science, right? Like OpenAI told us, this is art and this is science, and now we have to take the science of what we're talking about today and blend it with that art by making sure that we version ourselves. The same way you keep version control in a GitHub repo, keep version control within your prompting library.
[00:03:31]
If you iterate on a prompt, maybe because you had really bad luck with it and tried a different technique, keep track of that. Then when a new model comes out, try both versions, as in the sketch below. Did the new one work better? Did they work the same? Did you have the same struggles with the first prompt? This is what will carry us into the future: unbiased data on which prompting techniques, and which prompts, we actually need.
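One lightweight way to keep that version history is sketched here with JSON files on disk; the directory layout, file naming, and `save_version` helper are all assumptions for illustration, not the course's prescribed format:

```python
import json
from pathlib import Path

LIBRARY = Path("prompt_library")  # hypothetical library directory

def save_version(name: str, prompt: str, notes: str) -> Path:
    """Append a new numbered version of a prompt instead of overwriting it."""
    folder = LIBRARY / name
    folder.mkdir(parents=True, exist_ok=True)
    version = len(list(folder.glob("v*.json"))) + 1
    path = folder / f"v{version}.json"
    path.write_text(json.dumps({"prompt": prompt, "notes": notes}, indent=2))
    return path

# Keep both the original and the rewrite, so when a new model ships
# you can rerun v1 and v2 side by side and compare.
save_version("summarize-ticket",
             "Summarize this support ticket: ...",
             "original; struggled on long tickets")
save_version("summarize-ticket",
             "Let's think step-by-step. Summarize this support ticket: ...",
             "retry with chain-of-thought")
```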
[00:04:04]
Now, I love keeping artifacts of everything, so personally I'm going to keep the worst ones, the best ones, and some of the medium ones. You can focus on whatever you want, but at some point old models will get deprecated, and new models will replace them. So you can use this library and the notes within it to keep track of things like: a new model came out, and the model I'm currently using in my AI application is being deprecated.
[00:04:35]
I've seen that this next model is really good when I use chain-of-thought reasoning, but not so good when I use one-shot prompting. So when I move to this new model, I want to change the type of prompting I'm doing in my AI application to get the most out of that specific model. And lastly, like I said, remember that these models fail very differently from normal production code.
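To make that migration check concrete, here is a minimal harness sketch that reruns saved prompts against a candidate model so you can compare techniques like chain-of-thought versus one-shot; `call_model` is a stand-in for whichever client library you actually use (OpenAI, Anthropic, etc.), not a real API:

```python
from typing import Callable

def retest_library(records: list[dict],
                   call_model: Callable[[str, str], str],
                   new_model: str) -> list[dict]:
    """Rerun every saved prompt on a candidate model before migrating."""
    results = []
    for rec in records:
        output = call_model(new_model, rec["prompt"])
        results.append({
            "prompt": rec["prompt"],
            "technique": rec["technique"],  # e.g. "chain-of-thought" vs. "one-shot"
            "old_model": rec["model"],
            "new_model": new_model,
            "output": output,  # review or score these before switching over
        })
    return results
```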
[00:05:05]
I just want to make sure that really sinks in. Models usually aren't just going to error out, though sometimes they will: maybe you get a "too many requests" error, you run out of tokens, or you time out. But most of the time you'll get an improper response instead, whether that's hallucinations or just worsening output. So we have to be really creative when it comes to saving our responses, our outputs, and our prompts, and being able to measure those things.
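As one way to make those silent failures measurable, here is a sketch that logs every response with a couple of crude quality signals; the log location, field names, and `must_mention` heuristic are illustrative assumptions, not a standard evaluation method:

```python
import json
import time
from pathlib import Path

LOG = Path("responses.jsonl")  # hypothetical log location

def log_response(model: str, prompt: str, output: str,
                 must_mention: tuple[str, ...] = ()) -> dict:
    """Save a model response with simple red-flag checks, since a bad answer rarely raises an error."""
    entry = {
        "ts": time.time(),
        "model": model,
        "prompt": prompt,
        "output": output,
        # crude quality signals: an empty reply, or a reply that omits
        # facts it was given, is a red flag even though nothing "errored"
        "empty": not output.strip(),
        "missing_terms": [t for t in must_mention if t.lower() not in output.lower()],
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```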
[00:05:39]
So just keep that in mind. That's why we built a Prompt Library today: so we can save our prompts, try them again on new models, and really future-proof ourselves.