
The "More Ways to Train Models" Lesson is part of the full, Machine Learning in JavaScript with TensorFlow.js course featured in this preview video. Here's what you'd learn in this lesson:

Charlie spends a few minutes sharing other ideas for training models. Machine learning can be combined with hardware or other platforms to enable more creative use cases.


Transcript from the "More Ways to Train Models" Lesson

[00:00:00]
>> At the beginning I talked about what you see on the Teachable Machine homepage. We worked with this one, but there's also an audio model, and I can show that quickly. We don't need to implement it because the second part of the project is going to be a bit longer.

[00:00:16]
A question?
>> Is there a rule of thumb for how many samples you need when training these models? For instance, someone said they did 120 images, and it seemed like the result was roughly the same as what you did with 30.
>> Sure. So the thing is, because we're using the camera, it also depends on what's in your background and the lighting.

[00:00:39]
For example, if you try to teach a model in, let's say, a dark room or something, it's making predictions based on what's in the pixels. If it can't really tell the difference between your background and your movements, sometimes you will have more trouble getting it to predict something correctly.

[00:00:57]
But in terms of the number of images, it will depend on the difference between your gestures. As I said before, let's say, for example, I put my hand here, and then I just go a little bit up. There's not that much of a difference between the positions of my hand.

[00:01:13]
So you will need more samples for the model to be able to really see a correlation between what the pixels are and what label is associated with them. I don't necessarily know the context in which other people are training their samples here, but it could be a bunch of different things.

[00:01:32]
And usually I would say more images is better, but it will depend. If there's a ton happening in the background, colors everywhere and things, you might need more samples so the model can really isolate you from the rest. But I would say it depends, again. [LAUGH] Sorry, you had another question.
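To make that concrete, here is a minimal sketch of the Teachable Machine-style flow in TensorFlow.js, using MobileNet as a feature extractor and a KNN classifier from the official @tensorflow-models packages. The label names, the video element selector, and the idea of calling addExample once per captured frame are illustrative, not the exact workshop code:

```js
// Minimal sketch: MobileNet as a feature extractor + a KNN classifier,
// the same idea Teachable Machine uses for image models.
import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const classifier = knnClassifier.create();
const webcamEl = document.querySelector('video'); // live webcam <video>
let net;

async function setup() {
  net = await mobilenet.load();
}

// One call = one training sample: the MobileNet embedding for the current
// frame, tagged with a label. More varied samples per label (different
// spots, lighting, distances) usually means sturdier predictions.
function addExample(label) {
  const activation = net.infer(webcamEl, true); // true => embedding
  classifier.addExample(activation, label);
}

async function predict() {
  const activation = net.infer(webcamEl, true);
  const result = await classifier.predictClass(activation);
  console.log(result.label, result.confidences);
}
```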

[00:01:52]
>> To that point, I ran into a little anomaly where I did winking left and right eyes. And I didn't realize that when I recorded the right eye, my head was closer to the right of the frame. So then when I went to test it, even if I didn't wink, if my head was close to the right of the frame it would think I was doing the right eye, I'm guessing because more pixels were changed.

[00:02:12]
>> Yes, exactly. That's the kind of tricky thing with transfer learning, or at least with images: sometimes, if you want it to work, you have to be in the exact same spot. Or if I train it on a white background, but then I go and use it in a kitchen or in a park, it will behave a little bit differently, because it's not necessarily isolating you.

[00:02:36]
There's no real body recognition, because it could just be my hand without my body, and it would still have to learn what's going on there in terms of pixel patterns. So yeah, it depends. Sometimes you could also be closer to the screen when training than at other times.

[00:02:53]
So the best thing to do in that case is, first of all, to understand the context in which you want people to use your app. If you know that they're gonna be outdoors in the park, then maybe you want to train your model in that environment as well.

[00:03:10]
Or you gather a lot of different samples. You could record it here, and then go somewhere else and record the same thing, so that you give the model enough samples to figure out what's similar across all of these images.

[00:03:27]
If I go in a park and I'm like this, and then I go in the kitchen and I'm like this, same thing, it's easier for it to see the pixels that are similar rather than the ones that are different. So yes, in this workshop we started with 30 samples, which wasn't quite right; then with 100 it worked for me.

[00:03:42]
But depending on the use case and the environment, you might have to do a lot more. If you think about production applications of machine learning models, they are trained with millions of samples. So if you were recording yourself, like, a million times, that would work better.

[00:04:01]
But just to show you how it works, we don't really have the time to record a million samples. In general, more samples is better, and the quality of them matters too. There's another tool called Tiny Motion Trainer, and that one uses Arduino boards. If you don't know what an Arduino is, that's a drawing of it.

[00:04:22]
It's like a tiny microcontroller, and there's one specifically that works well with TensorFlow.js. It was, I think, a collaboration between the two, where instead of normal TensorFlow.js or TensorFlow models, you have TensorFlow Lite models, which are even lighter so they can run on microcontrollers.

[00:04:40]
And what does that allow you to do? I don't have my device right now, and it's also a UI, so I'm not gonna be able to show you what's behind it or actually proceed without connecting. But what happens in that UI is that if you do have this little device, I think it's an Arduino Nano 33 BLE Sense or an Arduino Nano 33 IoT, there's two of them.

[00:05:02]
And what you can do is just that: you connect your Arduino to the browser using the Web USB API. You pick certain settings, the threshold of difference in movement, the number of samples, the delay between captures, and you start doing the same thing that we did in Teachable Machine.
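As a rough sketch of what that browser-to-board connection looks like, here's the Web USB API pairing flow. The vendor ID filter (0x2341 is Arduino's) and the configuration and interface numbers are assumptions that depend on the board's firmware, and requestDevice must be called from a user gesture like a click:

```js
// Rough sketch of pairing an Arduino from the browser over the Web USB API.
// 0x2341 is Arduino's USB vendor ID; the configuration and interface
// numbers below are assumptions and vary by board firmware.
async function connectArduino() {
  // Must run from a user gesture (e.g. a button click handler).
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x2341 }],
  });
  await device.open();
  if (device.configuration === null) {
    await device.selectConfiguration(1);
  }
  await device.claimInterface(2); // board-specific interface number
  return device;
}

document.querySelector('#connect').addEventListener('click', async () => {
  const device = await connectArduino();
  console.log('Connected to', device.productName);
});
```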

[00:05:21]
But with motion data here. So if I wanted to do a punch gesture, I would create a punch label and then start recording the same way. But instead of taking snapshots from the webcam, it would take a live feed from this little device, which has an accelerometer and a gyroscope.

[00:05:37]
And while I do a punch gesture in the air, it records all that live data, and you're able to do that with multiple gestures as well. So that's kind of what I was talking about: depending on what you're building, you might not want to work with transfer learning from images.

[00:05:53]
>> You might want to do transfer learning from motion data. And this Tiny Motion Trainer application is actually what I used to do the Street Fighter thing, like the air Street Fighter thing. I built this in multiple ways. No, that's brain sensors, not that one.

[00:06:15]
This one. So I built this in four different ways, using four different sensors. But you could rebuild it using Tiny Motion Trainer, where you have the little Arduino. I recorded each gesture a few times, punch, punch, and then hadouken for the other one.

[00:06:32]
Then, training a machine learning model relatively quickly in the browser, it gets uploaded onto the Arduino via Bluetooth, which is really cool as well. And it means that once you actually connect your Arduino to your custom JavaScript application, you're able to get a label back and have that on the screen as well.

[00:06:52]
So a little bit different. We're not gonna do it in this workshop because it requires a microcontroller that you might or might not have. But it's definitely something that works as well, and they also have a very nice UI to help you get started. Hopefully that helps if you're building something and the webcam isn't the right medium for you.
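To give a feel for that last step, here's a sketch of how a custom JavaScript app could subscribe to gesture labels coming from the board over Web Bluetooth. The service and characteristic UUIDs are hypothetical; the real ones depend on the firmware the trainer flashes onto the Arduino:

```js
// Sketch: receive gesture labels from the board over Web Bluetooth.
// These UUIDs are made up for illustration; use the ones your firmware
// actually advertises.
const GESTURE_SERVICE = '0000aaaa-0000-1000-8000-00805f9b34fb'; // assumed
const GESTURE_CHAR = '0000bbbb-0000-1000-8000-00805f9b34fb';    // assumed

async function listenForGestures(onLabel) {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [GESTURE_SERVICE] }],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService(GESTURE_SERVICE);
  const characteristic = await service.getCharacteristic(GESTURE_CHAR);
  await characteristic.startNotifications();
  characteristic.addEventListener('characteristicvaluechanged', (event) => {
    // The board sends the predicted label as a UTF-8 string.
    const label = new TextDecoder().decode(event.target.value);
    onLabel(label); // e.g. 'punch', 'hadouken'
  });
}

// listenForGestures((label) => { resultEl.textContent = label; });
```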

[00:07:13]
This?
>> Arduino has its own framework too for programming those devices, right?
>> So usually it's written in C, or the Arduino language, which is a little bit different. You can do Arduino stuff in JavaScript, but the JavaScript part doesn't run on the Arduino. Usually you have to install another thing called Firmata that then translates your JavaScript instructions into something that can run on the device.

[00:07:42]
So there's usually a little bit more delay, because if you write in C, it runs on the board and it's, like, super optimized. If you do Arduino stuff in JavaScript, it works, I've done it multiple times, but you can't do applications that are necessarily very time sensitive.

[00:08:03]
But again, for [SOUND] this project I built a version with an Arduino that was not the one to use with Tiny Motion Trainer; I think that was an Arduino MKR1000. And the way that I did that is I created my own gesture recognition model. Hopefully you'll understand a bit more how to do this in part three, or project three.

[00:08:27]
But I recorded raw data a lot of times doing different gestures. Then I built a model using Node.js, and I plugged in Johnny-Five, which is the library to use with Arduino and JavaScript. So I was getting live data using Johnny-Five in JavaScript, recording it all to files locally, creating the model, and then having WebSockets to stream the results back.

[00:08:53]
There was a lot involved, but it's possible.
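That pipeline is more than we can cover here, but a compressed sketch of its shape in Node.js might look like this, with Johnny-Five reading live accelerometer data through Firmata and a WebSocket server streaming each sample out. The MPU6050 controller, the port, and the message format are assumptions; the recording-to-files and model-training steps are omitted:

```js
// Compressed sketch of that Node.js pipeline: Johnny-Five reads live
// accelerometer data from the board (through Firmata), and a WebSocket
// server streams each sample out to connected browsers.
const five = require('johnny-five');
const { WebSocketServer, WebSocket } = require('ws');

const board = new five.Board();
const wss = new WebSocketServer({ port: 8080 }); // port is an assumption

board.on('ready', () => {
  // Any accelerometer Johnny-Five supports would do; MPU6050 is one example.
  const accelerometer = new five.Accelerometer({ controller: 'MPU6050' });

  accelerometer.on('change', function () {
    const sample = { x: this.x, y: this.y, z: this.z, t: Date.now() };
    // In the real project, samples were also written to local files to
    // build a training set, and model predictions were streamed back.
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) {
        client.send(JSON.stringify(sample));
      }
    }
  });
});
```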
>> Yes.
>> We do have a course on Arduino.
>> Perfect.
>> And it's fairly recent too.
>> Okay.
>> It's up to date and everything, using the default kinda kit you can buy off Amazon, and
>> Yeah.
>> He programs sensors and stuff.

[00:09:12]
>> Yeah.
>> Like one that does light mode and dark mode on a web page, turning on dark mode when you turn off the lights.
>> Yes, that's super cool. Again, in this workshop I'm talking about machine learning, but you can definitely mix and match a lot of different technologies.

[00:09:35]
I mean, with hardware you can also learn about the Web USB API or Web Bluetooth. You get to learn more about these web APIs that are available, and they've been available for a while now, but usually, I guess, they're used more for creative, experimental stuff.

[00:09:53]
But I really like to mix and match.
