
The "Wrapping Up" Lesson is part of the full, Build AI-Powered Apps with OpenAI and Node.js course featured in this preview video. Here's what you'd learn in this lesson:

Scott wraps up the course with additional recommendations and suggestions for further exploring the OpenAI APIs. Topics include diffusion models and training models on GPUs. Students are encouraged to continue building on the knowledge gained in the course and to experiment with different ideas and applications.


Transcript from the "Wrapping Up" Lesson

[00:00:00]
>> That was the last thing that I wanted to cover today. So before I leave you all, I guess I'll just walk through some additional things that you might wanna look at and consider as you're building things. And just some cool products that I feel comfortable talking about, because I'm not involved with them, that you might like, okay?

[00:00:18]
So, some things that you might want to consider looking into: I would play around with diffusion models as well. You've probably heard of DALL-E, you've probably heard of Stable Diffusion. I highly recommend playing around with those too. If you have GPU access at home, or on your computer, I highly recommend trying to get into that.

[00:00:36]
And training some models yourself. It's actually incredibly intuitive, and fun, and complicated, and very tedious, but you learn a lot. I learned a lot about AI by having to train a diffusion model on my face. All that pain I had to go through to create something that was terrible was worth it, and I leveled up a lot.

[00:00:59]
And it helped me with like a lot of this ChatGPT stuff as well. So I recommend that. I also recommend looking more into the OpenAI API. They have way more stuff that we did not talk about. We only talked about completions and things like that. But if you go look at their API, there's a lot of stuff in here.

[00:01:22]
They have audio. We talked about embeddings, here's fine-tuning. People want to talk about fine-tuning, here it is. You can do fine-tuning. There's a lot of stuff you can do. They'll give you your own model, basically. You can add files there that you can use for fine-tuning. Obviously, there's DALL-E for images, they have that.
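
To make those endpoints a little more concrete, here is a minimal sketch of what the calls look like with the v4 openai Node SDK; the file names, model names, and prompts are just placeholders, not values from the course.

```js
// A minimal sketch using the v4 `openai` Node SDK. File names, model names,
// and prompts here are placeholders.
import fs from 'node:fs';
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Fine-tuning: upload a JSONL training file, then start a job on it.
const file = await openai.files.create({
  file: fs.createReadStream('training.jsonl'),
  purpose: 'fine-tune',
});
const job = await openai.fineTuning.jobs.create({
  training_file: file.id,
  model: 'gpt-3.5-turbo',
});
console.log('fine-tune job:', job.id);

// Audio: transcribe a recording with Whisper.
const transcript = await openai.audio.transcriptions.create({
  file: fs.createReadStream('meeting.mp3'),
  model: 'whisper-1',
});
console.log(transcript.text);

// Images: generate one with DALL-E.
const image = await openai.images.generate({ prompt: 'a robot reading API docs' });
console.log(image.data[0].url);
```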

[00:01:42]
And then, here's the list of models. There's just a lot of stuff. The new one is moderation. This is how you can create moderation policies and keep them up to date, and really figure out what people are saying. This is great if you're using AI to, I don't know, moderate a kid's platform or a chat app, something like that. This is perfect for it.
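
That moderation check is basically a single endpoint call. Here's a minimal sketch, again assuming the v4 openai Node SDK:

```js
// A minimal sketch of the moderations endpoint with the v4 `openai` Node SDK.
import OpenAI from 'openai';

const openai = new OpenAI();

const moderation = await openai.moderations.create({
  input: 'some user-generated message from the chat app',
});

const [result] = moderation.results;
if (result.flagged) {
  // result.categories shows which policies were tripped (hate, violence, etc.)
  console.log('blocked:', result.categories);
}
```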

[00:02:02]
So there is a lot of stuff here that I think is worth looking at. The other thing I would look into, I guess, in the world of AI this is relatively old now, but I would look into things like BabyAGI.

[00:02:21]
This is scary, BabyAGI. Not this one, this one. So, this is one of the first autonomous agents ever created. Here's a diagram of how it works, [LAUGH] if you want to figure it out, but there are basically four agents. So someone sends in an objective.

[00:02:41]
There's an execution agent that does some stuff with the vector database for memory, and then a context agent. It's just, it's a lot. It's a bunch of AI entities talking to each other, along with vector stores for memory, and task lists that get created and prioritized on a loop until the objective is completed.
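
Very roughly, that loop looks something like the sketch below. The agents are stubbed out with hypothetical helpers; in the real BabyAGI each one is an LLM call, and memory lives in a vector store rather than an array.

```js
// A hand-wavy sketch of a BabyAGI-style loop. executeTask, createNewTasks, and
// prioritizeTasks are hypothetical stubs standing in for LLM-backed agents, and
// the memory array stands in for a vector store.
const executeTask = async (objective, task, memory) =>
  `pretend result for "${task}"`;
const createNewTasks = async (objective, task, result) =>
  task.includes('follow up') ? [] : [`follow up on: ${task}`];
const prioritizeTasks = async (objective, tasks) => tasks;

async function runObjective(objective, firstTask) {
  const taskList = [firstTask];
  const memory = [];

  while (taskList.length > 0) {
    const task = taskList.shift();

    // Execution agent: do the task, with past results available as context.
    const result = await executeTask(objective, task, memory);
    memory.push({ task, result });

    // Task creation agent: propose follow-up tasks based on the result.
    taskList.push(...(await createNewTasks(objective, task, result)));

    // Prioritization agent: reorder what's left against the objective.
    const reordered = await prioritizeTasks(objective, taskList);
    taskList.splice(0, taskList.length, ...reordered);
  }
  return memory;
}

await runObjective('research the running-shoe market', 'make a task list');
```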

[00:03:02]
And add function calling into that. And yeah, there you go. That's an AGI agent. It's one of the first ones. There are a lot more now. But I highly recommend going down that rabbit hole because it's fascinating. AutoGPT is probably the better one, I think, which is heavily based off of BabyAGI.

[00:03:21]
There's a website called AGBT now? Okay. Here we go, AutoGPT. This one is absolutely insane. I actually use this a lot to find shoes online, because it does a really good job with that. Pretty much any bill that I used to pay online, this thing pays it for me now.

[00:03:40]
I don't do it anymore. And I use it for some other research-based things, like, go research this market for me. And then I'll come back in two minutes, and it has a really good report, or a good enough report, because I taught it how to go to Google. It can think about it.

[00:03:55]
It can go to my notes and see things that I care about, and it'll make sure to cover those things. And it'll get its work checked by another agent that is tuned on different instructions that I gave it. It's pretty good. It works pretty well. It's kind of scary.

[00:04:07]
[LAUGH] And it cost me like, I don't know, two cents to do it, maybe. So it's pretty cool. So yeah, check that out. And then for some cool products, I talked about Cursor a little bit. This is really cool. You can do things like this. I can be like, I don't know, fix this.

[00:04:26]
I'm not even going to give it any context, just fix this. And I don't know, it's going to go line by line and try to fix this. I don't think it's going to fix anything, because I think this code already works. But it works pretty well when you ask it to do something.

[00:04:40]
What did it not like there?
>> Took away the arrow functions.
>> Really? No, it added arrow functions. I know why. I have it in the system prompt here that I prefer arrow functions. So it went and added arrow functions to everything. I specifically said I prefer arrow functions. So it did that. Where did I put that?

[00:05:04]
Not here. Let's see, I forgot where you can look to see. Yeah, here we go. No, I forgot. There's a way you can write a system prompt for this thing, and I wrote one. Here we go: always use functional React components as arrow functions. So, this isn't a React component, but I guess it saw that I like that.

[00:05:27]
So it did it. You see right now, it's trying to create an index on my entire code base so it can better answer questions. And it can also index any website you give it, which includes documentation. Like here, go index this website, and it'll go learn the whole website.

[00:05:39]
So you can reference it here, in your stuff. So it's a great way to be like, I need to teach this thing Python right quick, which it should already know. Or, I need to teach it this RubyGem or this npm package, or something got updated. Go index it, come back in five minutes, and now you can have it generate code based off that.

[00:05:57]
Which is something you can't do in ChatGPT. I don't think you can do that in Copilot either. So, that part's pretty cool.
>> Would the code from this course be able to be run in a server-side handler in Next.js?
>> Yes, all the code from this course.

[00:06:12]
Not only can it run on the server side, it can run on the edge. It could run in the browser. All this code can run in the browser, it can run on the edge, because, yeah, we're using an in-memory vector store. So, yeah, this works in every single environment that JavaScript works in right now.
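
The portability comes from keeping the vectors in plain JavaScript memory. Here's a rough sketch with something like LangChain's MemoryVectorStore; the exact import paths vary by LangChain version, and this assumes the older langchain/* entry points.

```js
// A rough sketch of an in-memory vector store with LangChain JS. Because the
// vectors live in a plain JS array, the same code runs in Node, the browser,
// or on the edge. Import paths assume an older LangChain layout.
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';

const store = await MemoryVectorStore.fromTexts(
  ['the meeting is at 3pm', 'ship the release on Friday'],
  [{ id: 1 }, { id: 2 }],
  new OpenAIEmbeddings()
);

const hits = await store.similaritySearch('when do we meet?', 1);
console.log(hits[0].pageContent);
```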

[00:06:31]
>> Maybe the hard part of that is keeping your server-side keys safe.
>> Yeah, yeah, hold on, yeah. I'm not advocating putting your keys out there. But yes, if you can hide your keys, then yeah, for sure. But yeah, they've already thought about this, right? So, OpenAI on the edge, that's a thing.

[00:06:51]
I mean, there are a million packages for this. It just works well. You can even stream from the edge. You have the Vercel AI SDK, which is npm install ai. That works from the edge. There are a lot of ways you can get this to work pretty much anywhere, yes.
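
As one example of streaming from the edge, here is a rough sketch of a Next.js edge route handler using the Vercel AI SDK's OpenAIStream and StreamingTextResponse helpers (the v2-era API of the ai package; newer releases expose different functions).

```js
// A rough sketch of streaming from an edge route handler, assuming the
// OpenAIStream/StreamingTextResponse helpers from the `ai` package (v2-era API)
// and the v4 `openai` SDK.
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

export const runtime = 'edge'; // Next.js route segment config

const openai = new OpenAI();

export async function POST(req) {
  const { messages } = await req.json();

  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // Pipe the token stream straight back to the client.
  return new StreamingTextResponse(OpenAIStream(response));
}
```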

[00:07:14]
>> When would it be better to use something like Zod for structured output, rather than not using it?
>> Okay, let's talk about that. So, Zod is a schema validation library; it helps you describe the shape of the data you expect. So, okay, I gotta unpack this, 'cause there's a lot. So LangChain uses Zod.

[00:07:34]
Let's talk about what happened before function calling. Before function calling came out, you would use something like Zod to get structured output. But you had to do that with prompt engineering. So that basically means you would create a Zod schema that describes the shape that you want, very similar to this.

[00:07:49]
It's very similar, just in a Zod way. And then LangChain would take that Zod schema and convert it to a set of instructions inside of a prompt for you. It would try to make a prompt to coerce the AI to always follow this structure. If you go look at the last course that I made, on building full stack AI apps, I talked about exactly that, because function calling hadn't come out when I built that course.
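
In LangChain JS, that prompt-engineering version looks roughly like the sketch below, using StructuredOutputParser.fromZodSchema; the schema itself is just an arbitrary example, and the import paths assume an older LangChain layout.

```js
// A rough sketch of the prompt-engineering approach: LangChain turns a Zod
// schema into format instructions that get pasted into the prompt. The schema
// here is an arbitrary example.
import { z } from 'zod';
import { StructuredOutputParser } from 'langchain/output_parsers';

const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    name: z.string().describe('the name that was mentioned'),
    sentiment: z.enum(['positive', 'negative']).describe('overall tone'),
  })
);

// These instructions get appended to your prompt; the model is only being
// asked nicely to follow them, which is why it sometimes doesn't.
console.log(parser.getFormatInstructions());

// Later, after the model responds:
// const structured = await parser.parse(llmOutputText);
```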

[00:08:12]
So I had to use a Zod schema, and I think what happened to me in that course, as you can see, is I would run it sometimes and it would come back with structured data, and sometimes it wouldn't. So then LangChain thought of that, and they're like, all right, we'll try to parse the output to see if it matches the schema that you told us to put in there.

[00:08:29]
If it doesn't, we'll automatically call it again with an enforcer that says, hey, I told you to do this schema, do it this way. And then usually that second time, it'll do it. Basically, if you only use a Zod schema, you're just doing prompt engineering, which means it's nondeterministic.
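
That retry step is what LangChain's OutputFixingParser does. Here's a rough sketch reusing the parser from the previous sketch, again assuming an older LangChain layout.

```js
// A rough sketch of the "enforcer" retry: if parsing fails, OutputFixingParser
// calls the model again with the error and the original format instructions.
// `parser` is the StructuredOutputParser from the previous sketch.
import { OutputFixingParser } from 'langchain/output_parsers';
import { ChatOpenAI } from 'langchain/chat_models/openai';

const fixingParser = OutputFixingParser.fromLLM(
  new ChatOpenAI({ temperature: 0 }),
  parser
);

// Feed it malformed output and it asks the model to repair it into the schema.
const repaired = await fixingParser.parse('oops, not valid JSON at all');
console.log(repaired);
```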

[00:08:45]
Now that you have function calling, LangChain will just convert your Zod schema into function calling schema. You're still using Zod, but now, instead of doing prompt engineering, they'll just convert it to function calling. But you can also just do function calling, or you can convert your own Zod schema to function calling.
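
If you want to do that conversion yourself, one common route is the zod-to-json-schema package (assumed here), since the functions parameter takes JSON Schema for its arguments. A rough sketch:

```js
// A rough sketch of converting a Zod schema into a function-calling schema via
// zod-to-json-schema (an assumed helper package), using the original
// `functions`/`function_call` parameters of the chat completions API.
import { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';
import OpenAI from 'openai';

const openai = new OpenAI();

const extraction = z.object({
  name: z.string(),
  sentiment: z.enum(['positive', 'negative']),
});

const response = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'I loved talking with Sam today!' }],
  functions: [
    {
      name: 'record_sentiment',
      description: 'Record the extracted name and sentiment',
      parameters: zodToJsonSchema(extraction),
    },
  ],
  // Force the model to call this function so you always get structured output.
  function_call: { name: 'record_sentiment' },
});

console.log(JSON.parse(response.choices[0].message.function_call.arguments));
```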

[00:09:02]
The only reason you wouldn't use function calling over prompt engineering is if you were using a model that didn't support function calling, which not all of them do. So that would be the only reason I could think of. The difference with function calling is that it's fine-tuned into the model.

[00:09:15]
It's not prompt engineering. So, there are two camps of how to get back what you want from AI right now. One is prompt engineering. And that seems to be the way that most people are doing it today. But I think most people are in agreement that the future of getting what you want from AI is fine-tuning.

[00:09:30]
It's just not there yet. So, we're getting there. It's just not easy yet. So, yes, use a Zod schema if you have to. Otherwise, use function calling.
>> Okay.
>> Awesome. Well, this is it. This is what I wanted to cover today. Hopefully there were enough examples here for you all to walk away with some idea of the power, the flexibility, and just the pure amazingness of how to add generative AI technologies to an application.

[00:09:57]
Because again, this is really just scratching the surface. Like I said, even just combining some of these things, imagine if we took the search function and this QA function, added function calling, and then added it all to the chat interface. I mean, that alone is an app.

[00:10:12]
[LAUGH] You just built ChatGPT. You have plugins, you have function calling, you have searching, you have QA. It's pretty intense. So you can whip together some pretty impressive things very quickly. So hopefully that gave you an idea of some things you can do. I would say, going forward, maybe pick one of these ideas, or two of these ideas, and expand on them a little further.

[00:10:33]
If you have a Notion workspace that you use, or a wiki somewhere, try to build a UI around doing QA over that wiki, or something like that. Just get your feet wet a little more, and maybe push it to production, so you can experience what that feels like in production.

[00:10:49]
Go find yourself a vector database to use, try to hook all that up, and just really get lost in it. I think that's where the learning comes from, doing all that. And then just use this as a reference. The whole point of this was to give you a few different examples, like how do I do a chat, or how do I do a semantic search, or what does document QA look like?

[00:11:08]
It's just there as an example. But I really want you to build on top of it, to solidify this learning. So, hopefully everybody liked the course. I definitely had a lot of fun making it. Thanks for coming. [APPLAUSE]
