Check out a free preview of the full Build an AI-Powered Fullstack Next.js App, v3 course
The "OpenAI Setup" Lesson is part of the full, Build an AI-Powered Fullstack Next.js App, v3 course featured in this preview video. Here's what you'd learn in this lesson:
Scott walks through how to create an OpenAI API account. First-time users will be given a $5 credit for API usage. A credit card can be added after the credits have expired. An API key is created and added to the application's .env.local file, and the first prompt is sent to the API.
Transcript from the "OpenAI Setup" Lesson
[00:00:00]
>> Next thing we need to do is make an OpenAI account so we get API access. You will need a credit card for that. Don't worry, the credit card is just there to enable the API, because they charge you as you go. It's on-demand billing; you're not paying for something up front.
[00:00:14]
In this course, if you just wrote a while loop, left it, and walked away, you might pay a dollar. It's not gonna be expensive. I mean, you'll get rate limited before you even hit that. But anyway, make sure you sign up and have an account. I already have one, so I'm just gonna log in.
[00:00:32]
And if you see this screen where it says ChatGPT or DALL·E or API, click API. That's the one we want. ChatGPT is its own app with its own subscription. It's not the same thing; you actually have to pay for that separately, like 20 bucks a month or something like that if you want the Plus version.
[00:00:49]
And then we just want the API.
>> Max for admin has a little first look at using the ChatGPT API.
>> Nice. So once I'm in here, what I can do is click Personal right here, then Manage Account, then Billing, and basically you just need to add a payment method.
[00:01:13]
So I have my payment method here. You need to make sure you have a payment method on file; $5 will take you through the whole month. I can show you my usage right here. Where's my usage? So far, I've spent 52 cents this month in June, and I'm telling you, I've been going pretty nuts on AI stuff, so you won't have a problem when it comes to cost.
[00:01:37]
That $5 should be more than enough. So, yeah, add your card there. And then the last thing you wanna do is click on View API keys and make a key. Let's make an API key and copy it. You can make as many as you want, but just have an API key ready.
[00:01:51]
And we'll be using that. First thing we wanna do is, if you haven't already, make that API key, copy it, and add it to our environment variable file, so we can just drop it in the .env.local. If you drop it in a .env file instead,
[00:02:06]
just make sure that file is in your .gitignore, 'cause by default it's not. I had to add it. And then even after you add it to your .gitignore, it might not be ignored. You might have to delete it, make a commit, add it back, and then it'll be ignored.
[00:02:20]
'Cause it's already been checked in; that's just how Git works if you've committed the file before it was in your .gitignore. So I'm just gonna add it in my .env.local. And you have to give it this name, OPENAI_API_KEY=, and then the actual key. So once you have that, just OPENAI_API_KEY, add your key.
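As a quick sketch, the .env.local entry looks like this (the key value below is just a placeholder):

```
# .env.local
OPENAI_API_KEY=sk-your-actual-key-here
```

If a .env file was already committed before it was ignored, Git keeps tracking it; running something like `git rm --cached .env` and then committing untracks it, which is the delete-and-re-add dance Scott describes.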
[00:02:48]
Once you have that, you can go into the ai.ts file and let's just create an example. Let's just get started with this. The first thing we're gonna do is make a function called analyze. Analyze. How do you spell analyze? Yeah, okay, there we go. So we'll make a function called analyze.
[00:03:12]
We'll export that since we're gonna use it. And to be clear, we don't need LangChain to interact with OpenAI. OpenAI has their own SDK. I mean, you can even make a fetch call if you want; it's just an API. But LangChain makes it easier for us to do things.
[00:03:30]
In some cases it makes it harder, but you'll see once we start building more sophisticated things, like trying to do QA and so on. The first thing we're gonna do is import from langchain/llms/openai, and we wanna import OpenAI. So it's a named import.
[00:03:59]
LangChain works in Node, it works in the browser, it works on the edge, it works in any JavaScript environment. We're gonna be using it in Node; we won't be using it in the browser. So everything we do with AI will be server side, whether that's happening in a server-side data call in a server component or in an API call in one of our route handlers.
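At this point the file is just the named import and an empty exported function. A rough sketch (the file path and the prompt parameter name are illustrative, not confirmed by the lesson):

```ts
// ai.ts -- server-side only
import { OpenAI } from "langchain/llms/openai";

export const analyze = async (prompt: string) => {
  // model setup and the completion call come next
};
```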
[00:04:22]
Okay, cool. So for analyze, basically let's just have this take in a prompt or something like that for now. And we'll make this async. Now let's just see if we can run an inference, or I think in OpenAI terms they call it a completion.
[00:04:39]
So a completion is: give it a prompt, get back an answer. That's a completion. So the first thing to do is make a model instance. So we can say model, and that's just gonna be a new OpenAI. It takes a config object; you don't have to put anything here, but I'm gonna put a temperature of zero. I'll talk about that in a minute.
[00:05:02]
And I forget the default model, so: OpenAI has many models, like GPT-3 and GPT-3.5 Turbo, which is what's in ChatGPT. If you have access, there's GPT-4 as well. I have access to GPT-4. We will not be using that; it will cost you a lot of money.
[00:05:22]
And it's really slow. So we're not gonna do that, but it's so much better. If we had GPT-4 on this app, you thought the responses we got were cool, GPT-4 is even crazier. So we're gonna say modelName, and we're just gonna say gpt-3.5-turbo, which is probably the default, I just don't remember.
[00:05:43]
LangChain is a very active project. They make changes multiple times a day, so it's quite honestly hard to keep up with all the little things they change. So I just don't use any of the defaults; I tend to be explicit about everything I'm doing. Okay, so we have our model.
[00:06:03]
And then to make a completion, all we have to do is say result = await model.call, and I can just pass in a prompt, like this. And then I'm just going to log the result so we can see it. All right, so we have that.
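Putting that together, the analyze function at this stage looks roughly like the sketch below. The pieces named in the lesson are the OpenAI named import, temperature: 0, modelName: "gpt-3.5-turbo", and model.call with a prompt string; the string type on prompt is an assumption.

```ts
import { OpenAI } from "langchain/llms/openai";

export const analyze = async (prompt: string) => {
  // Be explicit instead of relying on LangChain defaults, which change often.
  // temperature: 0 keeps the output deterministic; the OpenAI wrapper reads
  // OPENAI_API_KEY from the environment automatically.
  const model = new OpenAI({ temperature: 0, modelName: "gpt-3.5-turbo" });

  // A "completion": send a prompt, get the model's answer back as a string.
  const result = await model.call(prompt);
  console.log(result);
};
```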
[00:06:28]
Let's try to get this to run. I'd say the quickest way to get this to run right now is to just add it in a server-side data call in a server component somewhere. We're definitely gonna do it in an API handler when we build it out fully, but just to test this out, it's quicker to add it to this journal page right here that's already doing some server-side logic, so we just add it here.
[00:06:55]
This is happening on the server whenever you refresh the page, so we can just do it here and see the result. So let's do that. I'll bring that in, analyze. And you can type in whatever prompt you want. I'll even await it. So it's gonna take a little longer to load the page, obviously, now that I'm awaiting this, but it's all good.
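For reference, dropping the call into the journal page's server component might look something like this. The page path, import alias, and JSX are hypothetical placeholders; the point is just that analyze runs on the server while the page renders.

```tsx
// app/journal/page.tsx (path and markup are illustrative)
import { analyze } from "@/util/ai";

const JournalPage = async () => {
  // Existing server-side data logic for the page would live here.
  // Awaiting analyze means the render waits on the OpenAI call.
  await analyze("Create a React component that renders a counting number.");

  return <div>Journal</div>;
};

export default JournalPage;
```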
[00:07:16]
If you haven't used GPT or ChatGPT, you might be asking, what do I put here? What do I say? You can say anything; it really doesn't matter what you put. I can say, create me a few components that render a counting number. It doesn't matter, [LAUGH] you put whatever you want here.
[00:07:43]
You can ask it to give you a nutrition plan if you want; it doesn't matter. So we'll do that, go back to the journal page, and as you can see, it's taking a little longer, just like I told you it would, because it's making that call to OpenAI, which is not gonna be as quick as hitting our database.
[00:07:59]
Then if you go look at the output of that, you can see for me, it did it. It was like, cool, here's an example of a component that renders a counting number, and then it just wrote the code for me. So it's working and we got this output back.
[00:08:16]
So I don't know what you put in, but obviously you might see something different than what I see.