Check out a free preview of the full First Look: ChatGPT API for Web Developers course

The "Prompt Engineering for Developers" Lesson is part of the full, First Look: ChatGPT API for Web Developers course featured in this preview video. Here's what you'd learn in this lesson:

Maximiliano explains prompt engineering, the process of designing and refining prompts to get desired responses from language models. The discussion covers important goals to keep in mind when creating prompts, such as consistent outputs, desired format, preventing abuse, prompt validation, and avoiding injection of unwanted content.

Preview
Close

Transcript from the "Prompt Engineering for Developers" Lesson

[00:00:00]
>> So the next topic where we will finally understand how ChatGPT works is known as prompt engineering. The definition is the process of designing and refining prompts or inputs for language models like GPT to generate desired outputs or responses. So that's the idea. I like to say this is more prompt hacking than engineering.

[00:00:28]
That's why I don't like the term prompt engineering, it's kind of the buzzword of the moment. There are people, experts on AI selling books. And yeah, but it's more hacking than engineering. Also because it's really dependent on the model. The day they change the model, the tips that they're giving you are not useful anymore.

[00:00:48]
It happened a lot with DALI, with Metjourney. In every new version, the prompt engineering ideas were not useful anymore, right? But the idea is that understanding how we can set a better prompt to get a better answer, okay? That's the idea. The more explicit and large the prompt, more accurate the results we can get from GPT.

[00:01:16]
Also, more tokens you're spending, okay, but that's another thing. But the idea is that when GPT has more content, remember that this has to do with statistics. So actually, it try to think about what's the next world. And if it has more context, it can get better results.

[00:01:43]
Remember, that's the idea. We have a prompt and it goes to GPT and then we get the result. And now we mentioned before, ChatGPT plugins are not actually injecting new data to the to the black box, they're injecting the prompt. So we are always injecting the prompt. To send the chat history, we are injecting, in the prompt, the chat history.

[00:02:06]
Everything is about the prompt. We never touch the GPT model, it's a black box. As of today, it's a black box. The only thing that we can touch is the prompt. Even plugins on ChatGPT, they're just changing the prompt. So now, there are people saying, I have a super prompt, there are people selling prompts, I saw that.

[00:02:32]
So I'm not going to give you here super prompts to get your life easier, to make money with ChatGPT. No, I'm going to give you hints like targeting developers, web and app developers, so hints that will help us get better results. We want consistent and deterministic outputs. So if we are asking A and the answer is B, we want the same answer on every call, okay?

[00:02:59]
So that's kind of the deal. Sometimes we need the output in a specific formats for processing. So because we are not going to use ChatGPT for manual process. We want to use ChatGPT to integrate that directly with an algorithm, with a database, to automate things. And if I need the JSON we wanna structure, I need the JSON always on that structure.

[00:03:27]
So ChatGPT cannot create its own properties, or hallucinate about how a JSON looks like. We are paying for the API, so we'll need to reduce abuse as well. And also, we wanna validate that the user-generated content that goes into the prompt is valid. For example, This can be just some bad actor trying to inject the prompt.

[00:03:57]
But let's say you have a translation service, you wanna translate code from Java to JavaScript, whatever. That's your service. So you have a text box where a user will type Java code, and you will use ChatGPT to get that code into JavaScript. What happens if the user is pasting whatever, the bible?

[00:04:19]
It's not code. So have you ever think about that? What will happen if we send the bible? So what will happen with the output? So we need to think about that and how the prompt. We will see that in a minute. And also, we wanna stop prompt injection.

[00:04:38]
So we need to, again, we cannot automate that, because bad actors will always find a way to go over our rules. Well, we need to tell GPT, we need to tell the model what to do in case the input is not following the rules that I want. Does it make sense?

[00:05:02]
Everything has to do with prompt engineering, to working with the prompt. So, well, we need to remember that LLMs can hallucinate, making facts and presenting them in a convincing way. Just have that in mind always, right? To reduce hallucination, there are some basic rules that we can follow when we are writing the prompt.

[00:05:28]
And also, we should use the temperature as 0. Remember, the temperature is a property that we set. For example, in our server, we didn't set that one. So when we got here, We can have the property temperature and we can set it to 0, and that means a deterministic output.

[00:05:50]
If you set 1, it will hallucinate, it will be imaginative in their responses. But we don't want that, so we want temperature 0.

Learn Straight from the Experts Who Shape the Modern Web

  • In-depth Courses
  • Industry Leading Experts
  • Learning Paths
  • Live Interactive Workshops
Get Unlimited Access Now