Build AI-Powered Apps with OpenAI and Node.js

AI-Driven Function Calling

Scott Moss

Superfilter AI

The "AI-Driven Function Calling" Lesson is part of the full, Build AI-Powered Apps with OpenAI and Node.js course featured in this preview video. Here's what you'd learn in this lesson:

Scott explains function calling in AI models like GPT. Function calling allows the AI to call specific user-created functions to interact with the outside world and obtain up-to-date information. Significant challenges and considerations are associated with function calling, such as latency, accuracy, reliability, and security concerns.

Transcript from the "AI-Driven Function Calling" Lesson

[00:00:00]
>> So far, we've talked about some pretty cool stuff. A lot of things building off of previous things: Document QA piggybacking off of semantic search, semantic search piggybacking off of a little bit of the chat stuff that we did earlier. I mean, you could actually combine all these things together.

[00:00:16]
You could combine the semantic search abilities and the Document QA abilities with a chat ability. So you have a chat interface in which the AI is equipped with the right abilities to, I don't know, talk about different datasets and search on different datasets. The problem is that teaching the AI the instructions, or convincing the AI that it can do these things, and then teaching it when to do those things, is quite complicated.

[00:00:45]
You'd have to be really clever with your prompt engineering. And up until recently, that's what people did. LangChain actually has something like this: you can describe different tools, which is what they call them, or at least what they used to call them. A tool is basically you teaching the AI that it can do something on another system.

[00:01:12]
They called them tools, and it was basically some clever prompt engineering: hey, AI, here are some things that you can do, here's when you need to use each thing, and here's what each thing needs from you in order to work. And that was cool, but it was never deterministic and it never really worked well. When it worked, though, it was magical.

[00:01:30]
Since then, GPT has added something called function calling, which is like that, but trained into the platform. So function calling is really cool. It basically allows you to upgrade GPT, or any other model trained on something like this, so that instead of responding back with a message, GPT will respond back with a request for you.

[00:02:02]
It'll say, hey, I need you. You told me I could run these functions, and these functions do x, y, and z. Based off the query, the question you gave me, or the prompt that you gave me, I need to run this function that you said I can do.

[00:02:15]
And here are the arguments to the function that you said you need. So can you please go run that function, or whatever system you have, and then report back to me with the answer? Then I will run your prompt, right? So it won't run the function itself, it's asking you to run the function, but what it does is choose the right function to run based off the context that you give it.
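For reference, here's a rough sketch of that first half of the exchange using the OpenAI Node SDK. The function name, description, and example question are made up for illustration; the point is that the model comes back with a function_call instead of a plain message:

```js
import OpenAI from "openai";

const openai = new OpenAI(); // expects OPENAI_API_KEY in the environment

// A hypothetical function we "teach" the model about. We define the name,
// description, and JSON Schema parameters; the model never runs it itself.
const functions = [
  {
    name: "get_game_score",
    description: "Look up the final score of a recent game for a given team",
    parameters: {
      type: "object",
      properties: {
        team: { type: "string", description: "Team name, e.g. Warriors" },
      },
      required: ["team"],
    },
  },
];

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo", // any model trained for function calling
  messages: [
    { role: "user", content: "What was the score of last night's Warriors game?" },
  ],
  functions,
});

const message = response.choices[0].message;
if (message.function_call) {
  // The model is asking *us* to run the function, with the arguments it chose.
  console.log(message.function_call.name); // "get_game_score"
  console.log(JSON.parse(message.function_call.arguments)); // e.g. { team: "Warriors" }
}
```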

[00:02:40]
So kinda complicated, but it's actually quite simple in practice. So we'll talk about that a little bit. But I think you already have some notes on it, so let's go to that. So why do we need this? Well, if you didn't know this, LLMs are trained up to a certain date, and from there, they never learn again.

[00:02:57]
You might think that these LLMs, like GPT, are connected to the Internet, but they're not. If I ask GPT right now, what is the score of last night's game, right? What is the score of last night's football game? I don't know what this thing is gonna say. Yeah, I don't have real time information to find out the score last weekend.

[00:03:24]
You can check, it just doesn't know, right? It got trained up to a certain date, at which point it was released and the training was finished. That's all it knows, it doesn't know anything else. So I might be able to ask it the score of a game in 2005, or who won the 2005 NBA finals, right?

[00:03:47]
I might be able to do that. If it's not hallucinating, I might be able to. Yeah, actually, I think that's right. It knows that because it was trained after that date, so it found that information online somewhere, so it's accurate. So, yeah, what do you do, just wait for the next update to come out, for the new model, to see if you can get, like, today's score?

[00:04:05]
No, obviously you don't. So you have to like upgrade the model to be able to interact with the outside world. So that's one reason why we need something like function calling, and basically, I talked a little bit about it. It's basically just you telling the AI, here are all the things you can do.

[00:04:25]
Here's what they do. Here's how to know if you need them. And then the AI is like, cool, I'll tell you when I need to run them. So go run them for me. So it's like literally the AI asking you to do some work, which is kinda crazy if you think about it.

[00:04:37]
But it works very, very well. So, obviously, the benefit you get is up-to-date responses. So, for instance, we just made that semantic search, we just made that Document QA. Well, if you told the AI that it can do those things, and when it should do those things, whenever it got a prompt that it felt like it couldn't answer, it can look to see, what are the functions I have access to?

[00:05:00]
Does any one of these functions help me answer this question? Yeah, they told me about a function that I can use if I need to query a YouTube video, and the prompt they gave me was asking about a YouTube video. So I guess I'll respond back with, hey, can you run that YouTube video function that you said that I can do?

[00:05:16]
And here are the arguments that you said you need. Here they are. And then when you're done processing that function, can you pass it back to me, so I can answer this prompt that you originally asked me? And that's basically what function calling is. It's really powerful how it works.
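In code, that hand-back might look roughly like this. Here, message stands for an assistant reply containing a function_call (like the one in the earlier sketch), and searchYouTubeTranscript is a hypothetical stand-in for the Document QA code built earlier in the course:

```js
// The model asked us to run a function; pull out the name and arguments.
const { name, arguments: rawArgs } = message.function_call;
const args = JSON.parse(rawArgs); // arguments arrive as a JSON string

// Run the requested function ourselves, on our own system.
const result = await searchYouTubeTranscript(args); // hypothetical helper

// Report back with the "function" role so the model can answer the original prompt.
const followUp = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "user", content: "What does the video say about embeddings?" },
    message, // the assistant message containing the function_call
    { role: "function", name, content: JSON.stringify(result) },
  ],
});

console.log(followUp.choices[0].message.content); // the final, grounded answer
```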

[00:05:32]
And then if you really think about that, if you put that on a loop where it takes in that answer, and then it thinks some more, and then it's like, you know what? I need you to go run this additional function. I need some more information. And then it takes that, and it's like, you know what, can you go to Google and look this up?

[00:05:48]
I need that too. And then it just keeps doing that. And it says like, all right, I got everything I need. Here's your answer. It's really powerful, and that's kinda what an agent is. It kinda does things like that. But there are a lot of challenges, there's a lot of complexity around handling the function calls.
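That loop could be sketched roughly like this; runFunction is a hypothetical dispatcher that maps a function name to our own code, and functions is the same array of definitions shown earlier:

```js
// Keep running whatever functions the model asks for until it answers.
const messages = [{ role: "user", content: "Summarize last night's game for me" }];

while (true) {
  const res = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages,
    functions, // the function definitions we declared earlier
  });

  const msg = res.choices[0].message;
  messages.push(msg);

  if (!msg.function_call) {
    console.log(msg.content); // "All right, I got everything I need. Here's your answer."
    break;
  }

  // The model asked for more work: run it, append the result, and loop again.
  const args = JSON.parse(msg.function_call.arguments);
  const result = await runFunction(msg.function_call.name, args); // hypothetical dispatcher
  messages.push({
    role: "function",
    name: msg.function_call.name,
    content: JSON.stringify(result),
  });
}
```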

[00:06:06]
Now you go from just having a user and an assistant role in a chat, plus a one-time system role, to having a fourth one called function, and to be able to handle that, you'll see it can get pretty crazy. There's obviously a lot of latency around the completion API to OpenAI, but there's also the latency around your system running the function.

[00:06:29]
And all those function calls have to have been run before the AI will proceed to answer the prompt. So you might be waiting, I don't know, if you have an agent and it's continuously asking you to go run these functions, and those functions are taking seconds.

[00:06:45]
I don't know, you're not gonna get an answer back until all that's done, I guess, right? So it's stuff like that. Accuracy and reliability are always a concern. And then, yeah, security concerns. Okay, what if one of the functions you told it about is like, hey, use this function if you need to write raw SQL against our database, all right?

[00:07:04]
[LAUGH] Yeah, exactly. And the AI is like, yeah, well, someone asked me to update a user. So, hey, I'm gonna call the function that you told me to, and here's the SQL query that I need you to run. And then you just take whatever SQL query it gives you and run it against the database, not knowing what it is, all cool.

[00:07:19]
Now someone just, like, updated a user to have an ID of two, or they updated the password to be this new password, right? So it's like, you gotta be a little cautious about this stuff, because you can actually build a demo of, like, yeah, here's a chatbot where if you ask it to change the password for someone else, it'll do it.

[00:07:37]
And it will do it, it doesn't care, right? And this is where people start to get scared like, it's gonna take over the world. It's like, yes, but someone wrote that, [LAUGH] someone made that reality, right? So, yeah, that's some of the security concerns there. So don't give it access to things that it shouldn't have access to.

[00:07:59]
You have to treat the AI as if it's a user, you wouldn't give a user direct access to your database. You shouldn't give an AI direct access to your database. And as long as you follow that rule of don't trust users, don't trust AI, you'll probably be fine cuz there really isn't much that AI can do that a user wouldn't be able to do.
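One way to apply that rule in code is to only ever execute functions from an explicit allowlist and to validate the model-supplied arguments like any other untrusted input. This is just a sketch; the function name and the db client are hypothetical:

```js
// Only these functions can ever run, no matter what the model asks for.
const allowedFunctions = {
  // Parameterized lookup only; the model never gets to write raw SQL.
  getUserEmail: async ({ userId }) => {
    if (typeof userId !== "string" || !/^\d+$/.test(userId)) {
      throw new Error("Invalid userId supplied by the model");
    }
    // db is a hypothetical database client (e.g. node-postgres)
    return db.query("SELECT email FROM users WHERE id = $1", [userId]);
  },
};

async function runFunction(name, args) {
  const fn = allowedFunctions[name];
  if (!fn) throw new Error(`Model requested an unknown function: ${name}`);
  return fn(args);
}
```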

[00:08:17]
So some of those same protections come in place for the most part. There are some different ones. Any questions on that? It's a little scary, right?
>> It's quite the rabbit hole.
>> It is a deep rabbit hole. And to be honest, I haven't really wrapped my head around all the implications of this yet.

[00:08:42]
Every day I'm like, wait, yeah, that's crazy, this one AI is talking to this other AI. Man, you can have a prompt that has some function calling in it, and one of the functions is to call another AI, right? [LAUGH] And now you have a team of AIs working together to accomplish something.

[00:09:04]
It gets nuts. It gets nuts.
