
The "Running Predictions on a Loop" Lesson is part of the full, Machine Learning in JavaScript with TensorFlow.js course featured in this preview video. Here's what you'd learn in this lesson:

Charlie implements the loop function, continuously evaluating the live webcam image and making predictions using the model data. After an initial test, a larger training data set is added to improve the accuracy of the results.


Transcript from the "Running Predictions on a Loop" Lesson

[00:00:00]
>> So we can do const loop, and it's gonna be an async function. Inside it, the way that they do it with this library, they have a webcam.update function to update the webcam frame. So you have to call webcam.update every time that we loop.

[00:00:22]
And when the camera is updated, we need to predict what's on the screen, right? So we're also going to await a predict function that we have not written yet, but we will. Actually, let's leave it like this for now, I want to try something.

[00:00:43]
If it doesn't work, I'll tell you what I want you to try, but first, let's leave it like this. So we have a requestAnimationFrame call, it calls the loop function, which updates the webcam frame and calls a predict function that we have not written yet, so let's write it.
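
At this point, a minimal sketch of the loop, assuming the webcam variable from the Teachable Machine image library is already set up, would look like this:

```js
// Sketch of the loop so far: refresh the webcam frame, then classify it.
// `webcam` is the tmImage.Webcam instance declared at the top of the file.
async function loop() {
  webcam.update(); // grab the latest frame from the webcam
  await predict(); // classify the current frame (written next)
}

// Kick things off; as we'll see in a moment, this alone only
// runs a single prediction.
window.requestAnimationFrame(loop);
```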

[00:01:00]
Okay, so the predict function, again, will be an async function, and it will look similar to what we did before. We know that when we call predict, or detect, or classify, we can have a predictions variable, and here we're gonna await.

[00:01:27]
We have our model loaded, so it's available, and in this example the method is predict. So predict, but we need to pass it something, and because we declared our webcam variable at the top, at this point we should be able to grab its canvas again to actually give it the image.

[00:01:48]
I don't actually know everything that's in the webcam object here, but I think canvas is what we want, because we only want the canvas snapshot. Okay, so we have predictions here, and maybe at this point, let's log that and see what happens. We can go a bit further later, because if I remember well, what's gonna come back is not only the single prediction of what it thinks, either right or left; it's gonna come back with both labels, but with different probabilities.
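
As a sketch, assuming the model was loaded earlier with tmImage.load and stored in a model variable, the predict function might look like this:

```js
// Sketch of the predict function: classify the current webcam frame.
// In the Teachable Machine image library, model.predict takes an image
// or canvas element and resolves to an array of
// { className, probability } objects, one per label.
async function predict() {
  const predictions = await model.predict(webcam.canvas);
  console.log(predictions);
}
```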

[00:02:22]
And then we're going to be able to extract the one with the highest probability. So at this point, let me check that there's no error, there is no error. So if you click on start, it should, it did not do anything: Failed to parse model JSON response. So instead of running npm run start in the terminal, run npm run watch instead.

[00:02:50]
So I think that I had two different commands here, where one of them is running Parcel against the index.html that we had before. But I had some issues with it loading a local file that's not served from the root, it's like outside of it.
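
The exact scripts aren't shown in the transcript, but a hypothetical package.json matching that description might look like this (both commands are assumptions):

```json
{
  "scripts": {
    "start": "parcel index.html",
    "watch": "python -m http.server 1234"
  }
}
```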

[00:03:07]
So let's run npm run watch, that's gonna just run a simple Python server on localhost:1234. So in the JS file where we are, below predict I do think that you have to call window.requestAnimationFrame(loop) again, because I think what happened here is that it enabled the webcam and it ran a prediction on the first frame that it saw, but then it doesn't keep running.

[00:03:37]
So let's try that, yeah. Okay, and now that works better, and you have everything continuously logging in the browser. What you see here is that you do get an array with both your labels, so right and left, but the probability for each is different.
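
So the fix, sketched out, is to schedule the next frame from inside the loop itself:

```js
// Requesting the next animation frame from inside the loop keeps the
// webcam frame updating and predictions running continuously.
async function loop() {
  webcam.update();
  await predict();
  window.requestAnimationFrame(loop); // schedule the next iteration
}
```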

[00:03:56]
So we're gonna have to write a bit more code to only return the highest probability, because of course, if I know it's 92% sure that I'm tilting my head left, then I don't really care about getting the right label as well. So I'm just gonna refresh that.

[00:04:14]
So we can pick back up where we console.log our predictions. The part of the code that we're gonna write now is not TensorFlow specific; it's just how to get the maximum prediction in an array, and there are probably other ways to write this.

[00:04:34]
But what I'm gonna do, we're gonna create a variable called topPredictionIndex, and that is going to be the maximum, so Math.max, and inside we're going to spread our predictions array and map through it, only looking at the max of the probability property. Okay, so again, maybe there are better ways to do this.

[00:05:10]
You don't have to write the comments that I'm writing here, but the predictions array comes back as an array of objects with a class and a probability, so for example right with probability 0.2, and then the same shape of object for left, and it keeps coming back continuously like that.

[00:05:37]
What we're doing here is a map operation to isolate the probability property, and we're doing Math.max on that. It should be returning the, actually, sorry, I called it the index, but just topPrediction should be fine as a name. So it's going to return the maximum probability. The way that I wrote it, again, could be better, but at this point it's going to return the probability, not the whole object.
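
As a sketch, with the predictions array shaped the way it comes back from the model:

```js
// predictions comes back shaped roughly like:
//   [ { className: "right", probability: 0.2 },
//     { className: "left",  probability: 0.8 } ]
// Map down to the probability values and take the largest one.
// Note this returns just the number, not the whole prediction object.
const topPrediction = Math.max(...predictions.map(p => p.probability));
```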

[00:06:06]
So then the way I did it is to create a topPredictionIndex and loop through the predictions array: we want the index of the object in the array that has that probability. I probably overcomplicated it, but it works, and for now that's enough, so we're gonna isolate the probability and compare it, okay?

[00:06:45]
So here, for the topPredictionIndex, we're gonna look through the predictions array and find the index where the probability in that object is the same as the one that we isolated here. And then we are going to log the class that is related to that index, or rather to the object at that index in the array.

[00:07:07]
So console.log, we have again our predictions array, so predictions, and we have an index, topPredictionIndex; that should actually give us the object with the highest probability. But we want to just isolate the label, and in the object, from what I can see in my code, that's called className, not class.
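
Put together, a sketch of that lookup:

```js
// Find the index of the prediction whose probability matches the
// maximum we isolated above, then log its label. In the prediction
// objects the label property is called className, not class.
const topPredictionIndex = predictions.findIndex(
  p => p.probability === topPrediction
);
console.log(predictions[topPredictionIndex].className);
```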

[00:07:27]
So now, when that's done, it means that what we're gonna log instead is the actual label of the gesture that is most likely to have been recognized by the model. So if you have written all this, then when we refresh the browser, my camera is on, and I click start, what we should see is, yeah.

[00:07:55]
Also, I left the other console.log in, a little bit, see, so I'm tilting right? So remember when I talked about flipping the camera, I think we probably need that to be true, because this looks a bit off. Okay, so let's see, maybe by default the webcam kind of flips you, and you actually want it to be the other way.

[00:08:16]
So if you have the same issue as me: where we're declaring the webcam, I have it on line 16, but it should be towards the top of the code, where we're calling tmImage.Webcam, you can change the last argument to true. If we save that, I'm also going to remove the logging of the predictions, because I just want to see the label.
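
For reference, a sketch of that webcam declaration with the flip argument set to true (the 200x200 dimensions here are just a placeholder):

```js
// new tmImage.Webcam(width, height, flip): the last argument mirrors
// the feed so tilting your head right reads as "right" on screen.
const webcam = new tmImage.Webcam(200, 200, true);
```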

[00:08:42]
I don't necessarily want to see the array of labels, so if we try again, [SOUND] well, I mean, sometimes, okay, so if I tilt right, yes, it looks right. It doesn't detect left too well, though. So it could be that, again, we only recorded 30 samples, that's not much, but I've had better luck with it sometimes.

[00:09:07]
So as it's getting confused, what we can do, and what's cool to do with transfer learning like this, is that if you realize that your predictions are not super great and you wish they were more accurate, you can actually, and we're gonna do that now, go back to Teachable Machine and retrain our two classes with maybe more samples.

[00:09:30]
So instead of 30, let's try to do like a hundred, that should just take a few seconds, and we can replace my model folder with a new one. And our code shouldn't have to change at all, because as long as the name of the folder is the same, the rest should be good as well.

[00:09:49]
So we can try to do that and see if that works, so let's go back to Teachable Machine. I mean, maybe for you, depending on what you recorded, it actually works a lot better, but for me, it seems to have a bit of an issue. So I'll use the same labels, I'm gonna start with right, my camera is on, and I'm gonna record more samples.

[00:10:18]
So it takes about 10 seconds to record about 100 images, and then for left, same thing, turn on the camera. Cool? Okay, so now I have 100 images for both. It should take maybe a second or two longer than before, because we have more images, but if it means that our prediction is better, then, when you build an application, that accuracy is probably what you want.

[00:10:47]
Okay, so here, right and left seem to be less wobbly. So remember before, when I was in the center, well okay, now that I just said that, it starts [LAUGH] being confused again. But now that I've re-recorded right and left with about 100 samples each, I can re-export, so download.

[00:11:10]
That's model number two. I'm gonna extract it, open my files, and delete the first model that I had, it's totally fine to not keep it, and I'm gonna drag this new one in. I'm gonna rename it my model like I had before, and now, I'm gonna stop Teachable Machine, and if I reload the page again: I did not change the code, I just changed the model.

[00:11:44]
And if I start again, okay, it's much better now. It's not flickering between right, left, right, left; when I'm right, it's right, when I'm left, it's left. So maybe all we needed was to record more samples, which makes sense, but again, if for you recording 30 samples was enough, then that's great as well.

[00:12:05]
And you know, sometimes if I do this, then it might get a bit confused, because it doesn't know what this is, it just knows right or left, okay. So just to recap, to kind of go over what happens: first of all, in less than 50 lines of code, you've implemented transfer learning.

[00:12:24]
So basically we start by writing the path to where the model is, and then we have our start button. When we click it, we're initiating our main function, and we are giving it the path to the model JSON file and the metadata; we don't need to give the path to the weights file.

[00:12:42]
The model JSON file, or rather the load function, takes care of that from the path to the model and the metadata file. And then the Teachable Machine image library exposes a tmImage variable, and you can call Webcam on it to initiate a webcam with the dimensions of what you see on the screen.

[00:13:04]
And here is the flip argument that we first set to false, and it was weird, so we set it to true. Here, setup is asking you for permission to use the webcam, play is to actually start streaming it, and then we're appending the canvas to the page to be able to see what's coming from the webcam feed.

[00:13:25]
And then with requestAnimationFrame, we're constantly calling the loop function that's going to update the webcam frame, and we're calling predict on every frame, basically. We're trying to see what's in this image, what label we're getting back. And in our predict function, by now you know that with TensorFlow.js it's usually a predict, detect, or classify method, and we're passing it the canvas as well.
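
Condensed into one sketch, the whole flow from the recap might look something like this (the model folder path, webcam dimensions, and start button selector are assumptions):

```js
const modelPath = "./my-model/"; // hypothetical path to the exported model folder
let model, webcam;

async function init() {
  // load() only needs the model JSON and metadata paths; it resolves
  // the weights file itself.
  model = await tmImage.load(
    modelPath + "model.json",
    modelPath + "metadata.json"
  );

  // Flipped webcam feed, appended to the page so you can see it.
  webcam = new tmImage.Webcam(200, 200, true);
  await webcam.setup(); // asks for camera permission
  await webcam.play();  // starts streaming
  document.body.appendChild(webcam.canvas);

  window.requestAnimationFrame(loop);
}

async function loop() {
  webcam.update(); // refresh the current frame
  await predict(); // classify it
  window.requestAnimationFrame(loop);
}

async function predict() {
  const predictions = await model.predict(webcam.canvas);
  // Plain JavaScript from here: isolate the highest probability and
  // log the label of the matching prediction.
  const topPrediction = Math.max(...predictions.map(p => p.probability));
  const topPredictionIndex = predictions.findIndex(
    p => p.probability === topPrediction
  );
  console.log(predictions[topPredictionIndex].className);
}

// Hypothetical start button hookup.
document.querySelector("#start").addEventListener("click", init);
```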

[00:13:52]
And then we can do some normal object manipulation in JavaScript to get the maximum probability and display that here as a log message, or otherwise on the screen. And from there, you can then build a bigger application where, for example, you could scroll with your head.

[00:14:13]
If the gestures that you record are like up or down, then you could have a machine learning model where, let's say, you're eating a sandwich while you're reading the news, and you can scroll with your head instead of putting your dirty fingers on your keyboard or something.

[00:14:29]
But you can think about whatever applications you want to build with this. It means that in a few minutes you can train a model with Teachable Machine, and in less than 50 lines of code you can start to have it on a web page.
