First Look: ChatGPT API for Web Developers

Chat Completion API

Maximiliano Firtman

Independent Consultant

The "Chat Completion API" Lesson is part of the full, First Look: ChatGPT API for Web Developers course featured in this preview video. Here's what you'd learn in this lesson:

Maximiliano demonstrates the process of making a POST request to the Chat Completion API and explains the contents of the response received from the server. Connecting the chat application to send prompts and receive responses from the API is also covered in this segment.

Transcript from the "Chat Completion API" Lesson

[00:00:00]
>> So now we'll go to the API reference. In the API reference you can, of course, see the things that you can do with the different APIs. Because we are going to work with ChatGPT, we are looking for Chat, not Completions. Completions is for using other models within OpenAI. So within Chat, you can see how that works.

[00:00:26]
So basically, to talk to the API, we need to make a POST request to that endpoint. And within the headers, we have a header for authorization, where we are going to send the API key. That's how OpenAI will know who we are, so it can charge the tokens to us.
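
At the HTTP level, the request described here looks roughly like this (the endpoint URL is the Chat Completions endpoint from the API reference; treat the exact values as illustrative):

  POST https://api.openai.com/v1/chat/completions
  Content-Type: application/json
  Authorization: Bearer YOUR_OPENAI_KEY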

[00:00:50]
So here we can see the example in curl. We can copy and paste this into the console; we just need to paste in the OpenAI key. But you can also click here and select Python or Node.js, and you'll see the code ready to copy and paste for Python or Node.js.

[00:01:10]
Why not JavaScript client-side? Because, as I mentioned before, it's a bad idea, but we will try it right now anyway. So if we try that version, let me paste that directly here in text format. I'm just going to copy the OPENAI_KEY value and paste it here where it says KEY.

[00:01:37]
So I want you to see this part: it's the JSON that we are sending through POST to OpenAI's server. It has some properties; for example, we are selecting which model we want. 3.5, that's the cheapest ChatGPT model, it's called gpt-3.5-turbo. That's the name. If you want gpt-4, it's just gpt-4, the expensive one.

[00:02:06]
Let's stay with 3.5-turbo because we are not going to make too complex queries. And then look at this, we need to send messages because we are talking to chat. messages is always an array, and we need to pass objects with role and content, role content, role content. For example, remember that within the playground, we have a system message, where we were explaining who you are, or who the system is.

[00:02:34]
If we want that, it's something that we pass in the array. It's role: system, that's the name, and we pass a content. And we say, You are a computer answering questions. It's not mandatory, right, so it's optional.

[00:03:04]
But this is just giving more context to the model so it can give a better answer. Then this is role: user, so the user is saying, Hi, do you know what to do in Minneapolis on Saturday? That's the prompt, okay? So we copy and paste this into a terminal, and we get the answer.
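
As a sketch, the JSON body being sent in this example looks something like this (the message wording is from the demo; the structure is what matters):

  {
    "model": "gpt-3.5-turbo",
    "messages": [
      { "role": "system", "content": "You are a computer answering questions." },
      { "role": "user", "content": "Hi, do you know what to do in Minneapolis on Saturday?" }
    ]
  }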

[00:03:43]
The answer is this JSON here. In fact, let's get this JSON answer into VS Code so we can look at it better and format it. So this is the answer that we receive from the OpenAI server. It says the model it is using is gpt-3.5-turbo, version 0301.

[00:04:09]
It tells us how many tokens we used for the prompt and how many tokens we used for the completion. In total, 130 tokens, at roughly US $2 per million tokens. Wow, do the math, it's still pretty cheap for that interaction. And then we receive an array of choices, where you have a message from the assistant, and the assistant is the model.
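
The response described here has roughly this shape (values abbreviated; the exact fields can vary by API version):

  {
    "model": "gpt-3.5-turbo-0301",
    "usage": { "prompt_tokens": ..., "completion_tokens": ..., "total_tokens": 130 },
    "choices": [
      { "message": { "role": "assistant", "content": "Hello! There are many things you can do in Minneapolis..." } }
    ]
  }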

[00:04:34]
And so it says, Hello, there are many things you can do in Minneapolis on Saturday, here are a few recommendations. One, visit the iconic sculpture garden at the, blah, blah, blah. So those are the basics of how we deal with this. So now we just need to mix this into our app.js file for the chat, because remember, here, this is what we were trying to do.

[00:05:05]
So I wanna say, Hello, hit send, and I wanna see that in the UI. So that's what I'm going to do right now. To do that, let's open the app.js file in the chat folder. And there is a function, sendChat, that is already reading the prompt from the user into a variable, and there is a TODO here.

[00:05:30]
So we actually need to make a query. Remember, let's say WARNING, we are doing this client-side with a big security risk, just for fun. In fact, this is pretty good if you wanna make an interface that runs locally on your computer and plays with GPT, but don't publish this with the key, okay?

[00:05:53]
We will do it better in the next example. So we need to make a fetch. We need to create a response object that will await a fetch to the endpoint that we want. By the way, I do have it here as well, the fetch function, but I'm going to do it from scratch.

[00:06:16]
The endpoint is this one, that's the URL. Okay, makes sense? And then we need to pass some arguments. In fact, I think they were already here, just for the sake of doing this faster. I don't wanna waste too much of your time.

[00:06:35]
So that's the endpoint, and then I need to pass an object where we say the method is POST. So I want to send a POST, and then the headers and the body. We don't have that data yet, we need to create it, so we will do that in a minute.

[00:06:52]
So for the headers, if we look at this version, it's not just the content type. We also need to pass Authorization: Bearer and the key. So I need to add that too, because if not, it's not gonna work. So it's Authorization, that's the name of the property. And we can use backticks here and use the OPENAI_KEY there, with all the warnings that I gave you about doing this on the frontend.
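
In JavaScript, the headers object ends up looking roughly like this, assuming OPENAI_KEY is the variable holding your key (with the same frontend warning):

  // WARNING: exposing the key client-side is only for local experiments
  const headers = {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${OPENAI_KEY}`
  };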

[00:07:33]
Okay, makes sense? So what about the data? I need to create the data object that actually matches what we saw here. It's not so complicated, it's just that object, so I can copy and paste it. It's the model and an array of messages.

[00:07:52]
I can pass the system message or not; it's one system message per query, right? And instead of, Hi, do you know what to do in Minneapolis, I need to pass the prompt that the user typed in our form control. Okay, like so. So we have the prompt, and that's the array.
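
A sketch of that data object, with the user's typed prompt substituted in (the variable name prompt follows the transcript; your starter code may name it differently):

  const data = {
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a computer answering questions.' },
      { role: 'user', content: prompt }
    ]
  };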

[00:08:17]
And then we await the response. And because it's a JSON response, it needs to be parsed as JSON. So we are going to say response.json(), and we are going to await that as well. Let's call it json, and let's do a console.log to see if that works.
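
Putting those pieces together inside sendChat, the call looks roughly like this (a sketch, not the course's exact file):

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers,
    body: JSON.stringify(data)
  });
  const json = await response.json();
  console.log(json);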

[00:08:38]
Does it make sense? So we're just making the same request, but from here. So I'm going to open the console to have that on hand, say Hi, hit send, and we receive the answer. So the object is there; now we need to dig into this object and get choices[0].message.content.

[00:09:09]
That's the actual message, so we need to get that. The message from the, let's say, computer, from GPT, is in that object. So it's json.choices[0], let's go here again, choices[0].message.content. And now what I need to do is attach that to my HTML.

[00:09:45]
So we can go quickly to the HTML and see that we have a ul, an unordered list. We're going to add list items pretty quickly to that ul. So something that I can do is query it quickly with vanilla JS. We will take the only ul that we have in the page; we could also add an ID or a class name.

[00:10:04]
And we can say that we are going to add a new li with that message. Okay, so we send the message like this. And maybe something that we can do is also add our own message, the prompt, before it. And because it's our own message, let's put it in bold HTML so we can make a separation.
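
A minimal sketch of that DOM update, assuming the page has a single ul as described (the element handling here is an assumption, not the course's exact code):

  const list = document.querySelector('ul');

  // Our own message first, in bold, to separate it visually
  const mine = document.createElement('li');
  mine.innerHTML = `<b>${prompt}</b>`;
  list.appendChild(mine);

  // Then the assistant's answer from the API response
  const answer = document.createElement('li');
  answer.textContent = json.choices[0].message.content;
  list.appendChild(answer);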

[00:10:32]
So now, here I will say, Hi, and if you look there, there are some nice emojis through CSS there: Hi, how can I assist you today? Let me also show you, is it working like our ChatGPT clone? We'll see. So what if I ask a question such as, do you know something about Frontend Masters?

[00:11:02]
Let's wait... Frontend Masters is an online learning platform, blah, blah, blah.
