
The "Embeddings & Vectors" Lesson is part of the full, Build AI-Powered Apps with OpenAI and Node.js course featured in this preview video. Here's what you'd learn in this lesson:

Scott introduces word embeddings and vectors and describes how an embedding is a vector of numbers that represents the meaning behind words. Embeddings are useful for storing and comparing the meanings of words and are essential for semantic search.


Transcript from the "Embeddings & Vectors" Lesson

[00:00:00]
>> So before we can get into building the search, there are a few other things we need to understand, and those are embeddings and vectors. So let's talk about that. So what is a word embedding? You can read this description, but I'm gonna try to explain it to you in the way that I'm feeling it right now.

[00:00:16]
I feel different when I'm thinking about things, depending on the time of day. But basically, an embedding is, let's think about it this way: computers are better at understanding numbers than they are at understanding things like words. In fact, they understand numbers very well. They can actually understand everything as a number, if you really think about it.

[00:00:42]
So an embedding is basically a collection of those numbers, and that collection is called a vector. So it's a vector of numbers that represents the meaning behind a word. So we're basically translating words to numbers, and those numbers represent the meaning of those words. I know, it sounds crazy, but that's basically what it is.

[00:01:07]
So why do we do that? Well, because if we can convert the words, and the meaning behind those words, into numbers, we can better use that on a computer, right? Because computers understand numbers a lot better. And if it's numbers, then we can do math, do all different types of comparisons, right?

[00:01:28]
If I say Apple, you might think computer, you might think iPhone. We understand that, cuz we have so much context. But how does a computer understand that? Well, if we turn everything into numbers, it can actually do math to understand that now. So that's what an embedding is.

[00:01:51]
And the way that looks is basically something like this. So, for instance, if I had this text, I can run that text through an embedding model, and what comes out is a multi-dimensional array of numbers, a vector, right? And each number in the vector is basically just some small decimal.

[00:02:10]
And these vectors are what they call high-dimensional. As in, depending on the embedding model, you might have thousands of dimensions. So I don't know how to visualize that for you, but there's more than two dimensions. Think of a dimension as, I don't know, one position in a really long array.

[00:02:30]
There's like 1,500 of those, basically, depending on the embedding model. So it's like a multi-dimensional array of numbers that represents the meaning behind the text you're converting. And there are AI models that do that conversion for you. So we're using AI to convert the text to numbers, which we can then use to talk to the AI.
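To make that concrete, here's a minimal sketch of getting an embedding back with the OpenAI Node SDK. The model name and sample text here are just illustrative choices, not necessarily the course's exact setup:

```js
// Minimal sketch: turn a piece of text into an embedding vector.
// Assumes OPENAI_API_KEY is set in the environment; the model and
// sample text are just illustrative.
import OpenAI from "openai";

const openai = new OpenAI();

const response = await openai.embeddings.create({
  model: "text-embedding-ada-002",
  input: "A cat sat on the mat",
});

const vector = response.data[0].embedding;
console.log(vector.length);      // 1536 dimensions for this model
console.log(vector.slice(0, 3)); // a few small decimals, e.g. [0.002, -0.009, 0.015]
```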

[00:02:51]
This is very important, because without this, the AI would not understand the semantics, the meaning behind the words. And we would not be able to compare them, which is the whole point of doing search, okay? And it's deterministic, so if I embed the same thing, I'll pretty much get back the same vectors every single time, which is great, because you need that.

[00:03:15]
Cool, any questions on that? You don't really need to know how all this works. Remember when I said there was a part where I just went down the rabbit hole? This is that part, okay? You just need to know that I can hit this embedding thing and get back some numbers, that's it.

[00:03:29]
How it works, you can read about that for a month. But it is very interesting. Okay, how does that relate to semantic search? Well, like I said, if everything is a number now, and it's dimensional, we can put it on a graph. Actually, there we go, I have a visualization here.

[00:03:51]
So okay, think about it like this. We have these different colors, right? Orange is animal, green is athlete, this blue is film, this purple's transportation, this pink is village. When we convert those words into vectors, so now we have embeddings, and we plot them, this is what we get. This is just a three-dimensional graph, but remember, it's over 1,000 dimensions.

[00:04:14]
So, [LAUGH] I don't know how to visually represent that to you. So here's a three-dimensional representation of it to keep it simple. You can see how they plot within that three-dimensional space. And from there, because we have an x, y, and z in this case, we can do math to see how close things are, right?

[00:04:32]
So how close is this to this? How close is this one to this one, right? And the closer they are in that graph, the closer they are in semantics and meaning, right? So that's how we're able to determine whether or not something means the same thing or is related. That's why it's important for semantic search.
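The math in question is usually cosine similarity: the cosine of the angle between two vectors, which lands near 1 when they point the same way (similar meaning) and drops off as they diverge. A quick sketch in plain Node.js, with toy 3-dimensional vectors standing in for the real ~1,500-dimensional ones:

```js
// Cosine similarity: dot(a, b) / (|a| * |b|).
// Values near 1 mean the vectors, and the meanings, are close.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 2, 3], [1, 2, 3]));    // 1, identical direction
console.log(cosineSimilarity([1, 2, 3], [-1, -2, -3])); // -1, opposite direction
console.log(cosineSimilarity([1, 0, 0], [0, 1, 0]));    // 0, unrelated
```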

[00:04:55]
Because without that, we would not know how the thing that you asked relates to the things that you're searching over. So that's how this works. So the flow is, basically, take the things that you wanna index and convert them to embeddings, to vectors, right? Put them in something called a vector database.

[00:05:15]
So there are specific databases for this, and a lot of databases have plugins that you can enable to turn them into vector databases. That way we can store these arrays of numbers, right? These vectors, these embeddings, we can store them. And then from there, when a query comes in, I wanna convert that query into a vector too.

[00:05:35]
And I'm gonna put it in the same space as the things that I wanna search on. So in here, with all the things I wanna search on, I'll take my query and I'll also throw it in here. Why would I do that? Because I need to see what is close to this query, right?

[00:05:48]
So if I take my query, and it lands over here, okay, I need to return the things that are the closest to it, right? Because those are the results that someone's asking for, right, in meaning. So I convert my query into a vector, an embedding. And then I should get back all of the things closest to it, and then I can just show those as results.

[00:06:10]
Like, is this what you meant? I can show the closest things in meaning. And that's basically how it works. So it sounds really complicated, [LAUGH] but luckily, we'll be using some SDKs and tools to help make that less complicated. That was a lot, I promise this is the hardest part of today, and you don't even need to understand this.
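Put together, the flow he just described might look something like this sketch, with a plain in-memory array standing in for a real vector database. The documents and the `embed` helper are made up for illustration:

```js
// End-to-end sketch: embed documents, "store" them, embed the query,
// and rank by cosine similarity. An in-memory array stands in for a
// real vector database; the documents are made up for illustration.
import OpenAI from "openai";

const openai = new OpenAI();

async function embed(text) {
  const res = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: text,
  });
  return res.data[0].embedding;
}

function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// 1. Index: convert everything you wanna search on into vectors.
const docs = ["Cats are small furry animals", "Trains run on rails", "Usain Bolt sprints"];
const index = await Promise.all(
  docs.map(async (text) => ({ text, vector: await embed(text) }))
);

// 2. Convert the incoming query into a vector in the same space.
const queryVector = await embed("pets");

// 3. Return whatever landed closest to the query, by similarity score.
const results = index
  .map((doc) => ({ ...doc, score: cosineSimilarity(doc.vector, queryVector) }))
  .sort((a, b) => b.score - a.score);

console.log(results[0].text); // most likely "Cats are small furry animals"
```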

[00:06:34]
This is just something I feel is impressive to know, even if you're not an AI person, how this works. Because this is actually the root, this is what's happening right now with generative AI and LLMs, it all comes down to this. I mean, there are, I don't know, $100 million companies now whose main purpose is to build a database to save this stuff, that's it.

[00:06:57]
That's all they do, it's just like, we're gonna save vectors. And it's a huge market. So I think this is gonna be so much more common in the future. When you go pick your database for your next app, like, I'm using Postgres, you'll be considering, does this thing support vectors or not?

[00:07:11]
Does it support cosine similarity search, which we'll talk about soon? You're gonna have to think about that, because you know that you're gonna be using AI somewhere in your app. And you need the ability to save these things, so I want you to be aware of them.
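For instance, with Postgres and the pgvector extension, a cosine-similarity search might look roughly like this, using the `pg` client. The `documents` table and its columns are hypothetical:

```js
// Hedged sketch: cosine-similarity search in Postgres via pgvector.
// The `documents` table, its columns, and the query vector are all
// hypothetical; a real query vector would need to match the
// dimensions of the stored embeddings.
import pg from "pg";

const client = new pg.Client(); // connection details come from PG* env vars
await client.connect();

const queryVector = JSON.stringify([0.01, -0.02, 0.03]); // stand-in vector

// `<=>` is pgvector's cosine *distance* operator, so smaller = closer;
// `1 - distance` turns it back into a similarity score.
const { rows } = await client.query(
  `SELECT text, 1 - (embedding <=> $1) AS similarity
     FROM documents
    ORDER BY embedding <=> $1
    LIMIT 5`,
  [queryVector]
);

console.log(rows);
await client.end();
```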

[00:07:26]
But you don't really need to know how all this works.
>> Didn't PlanetScale just fork MySQL or something?
>> [CROSSTALK]
>> PlanetScale did, yeah. So the story about that is, PlanetScale already had an internal fork of MySQL that they used anyway. Cuz MySQL was just slow to update, so they already had one internally.

[00:07:51]
So it wasn't hard for them to fork it to add support for vectors, but yes, they did. Because if you go look at Postgres, which is a better alternative than MySQL, in my opinion, their competitor, NeonDB, is basically PlanetScale, but for Postgres. Postgres already has a vector plugin, so therefore NeonDB had a vector plugin.

[00:08:21]
PlanetScale is like, our competitor is getting a lot of customers that have needs for vector data, and MySQL wasn't being updated anytime soon. So yeah, they just forked it again and added a vector plugin to it. So now you can store these vector embeddings inside of a MySQL database on PlanetScale, which is great for them.

[00:08:39]
So yeah, it is a serious thing, and there is a huge list of, let's see, let me find it right quick, there's a huge list. Here's a list of all the different vector stores. This is way bigger than the last time I used this. These are just databases or stores that support vector embeddings.

[00:09:03]
Last time when I used this, there were like five here. And I'm sure it's even bigger than this now. But some of the big ones, like Chroma, are open source. The biggest one is, I'll know it when I see it. I don't know what it's called, but, yeah, Supabase supports it now because it's built on Postgres.

[00:09:23]
Vectara is good, yeah, Weaviate is a big one, but there's the biggest one, I can't think of it right now, but it's on here somewhere. They're raising a bunch of money or whatever. But yeah, you have tons of options. So vector stores, any questions?
>> Any resources on going further with high-dimensional vectors?

[00:09:48]
>> Honestly, the best way that I learned it, there were two resources. ChatGPT, [LAUGH] I just asked ChatGPT, like, teach me this stuff. It actually created a lesson plan on vector data and embeddings, and how it works on a math level, and then on a practical level.

[00:10:08]
I had it do that, and then I also had it, I was like, write me a quiz for it. And it wrote me a quiz on it, and it taught me a lot, so that was one way. And then two, OpenAI's blog actually talks about it a lot.

[00:10:24]
So if you go to their blog and search, I'll just type blog, and then embedding. They have two or three blog posts in here, and you'll see I got some of the imagery from here. They have some really good imagery here that talks about how it works on that level.

[00:10:43]
It's very good, it's very comprehensive, it doesn't get too deep into the science, but it makes sense on a practical level. So you can see this is where I got that. It's interactive on here, that's really cool. I didn't even know that, this is nuts, who made this?

[00:10:58]
[LAUGH] What psychopath sat down and was like, I'm gonna make this, and did it? Good for that person. Yeah, so this is a good resource. They have other blog posts on here. YouTube videos are really good. I mean, honestly, there's no shortage of resources on this topic, cuz of how massive a topic it is.

[00:11:19]
So I don't think you'd have trouble finding anything if you just Google, like, what is a vector store? What are embeddings? You'll find tons of resources, but that's mostly what I did, ChatGPT and then the OpenAI blog. And then just playing around with it over and over and over again to see what's actually going on.

[00:11:38]
A lot of the vector store providers also have really good content out there, since they have to promote and educate people about what vector stores are and things like that. So they're also very good resources.
