Check out a free preview of the full Machine Learning in JavaScript with TensorFlow.js course

The "Image Model Layers" Lesson is part of the full, Machine Learning in JavaScript with TensorFlow.js course featured in this preview video. Here's what you'd learn in this lesson:

Charlie codes the layers for the image model. Rather than chaining the layers inside the TensorFlow sequential method, the "add" method adds layers one by one to the model. Some advice for selecting and experimenting with alternative layers is also included in this lesson.


Transcript from the "Image Model Layers" Lesson

[00:00:00]
>> So at this point, cool, we're loading our data, but now we need to actually feed it to the model. So we're going to go to our create model file, we are going to, yes, create our model, and the next step after that will be doing the whole training in a separate file.

[00:00:25]
But to create our model, we're going to start by requiring TensorFlow.js. So const tf = require('@tensorflow/tfjs'), and after that we have to declare a few variables. So first of all, we're going to declare a variable called kernelSize. Kernel_, or, actually, we can just write it as kernelSize to keep it consistent.

[00:00:55]
And the kernelSize can either be an integer or an array of integers. And to try to explain what the kernelSize actually does, I found a resource that probably explains it better than me. So when we have a tensor that has a certain shape with a lot of data, the kernel is basically a small array.

[00:01:15]
So here we have a 3 by 3 array, and it's looking at the data and transforming it into a smaller output for the next layer of the model. So instead of always passing the same 28 by 28 data to each layer, the kernel is kind of like a window: you look at the data and you represent it in another way.

[00:01:35]
So it's okay if you don't necessarily know why or how, but it's just a way to transform the data from one layer and pass it to the next. And we're gonna have a couple of layers that will have filters to transform the data from one form to another.

[00:01:51]
And then it's a way to find patterns in the data later on. So here for our kernelSize, I personally worked with an array of 3 by 3, so as I was showing you in the example, it's gonna be kind of a 3 by 3 matrix. It's gonna look at our input tensor and reduce each 3 by 3 window into, maybe, a single number for the next layer.
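To make that windowing idea concrete, here's a rough plain-JavaScript illustration (the kernel and window values are made up; this isn't TensorFlow code):

```js
// A made-up 3x3 kernel: a small grid of learned weights.
const kernel = [
  [1, 0, -1],
  [1, 0, -1],
  [1, 0, -1],
];

// One 3x3 window taken from a larger 28x28 input.
const patch = [
  [0.2, 0.5, 0.1],
  [0.3, 0.9, 0.4],
  [0.0, 0.6, 0.2],
];

// Element-wise multiply and sum: the window collapses to one number,
// which is what gets passed on toward the next layer.
let sum = 0;
for (let i = 0; i < 3; i++) {
  for (let j = 0; j < 3; j++) {
    sum += kernel[i][j] * patch[i][j];
  }
}
console.log(sum); // ≈ -0.2, one output value for this window
```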

[00:02:15]
So that's our kernelSize, then we can create a variable called filter. We're gonna set it to 32, and the thing is, two of the layers we're gonna create in our model are going to be a layer type called conv2d. And filters is actually a required parameter for it.

[00:02:37]
So when you're going through the steps in your machine learning training process, at each layer, the filter value here is gonna be the number of filters the data goes through until the next step of the training process. So I don't know exactly what it does internally, but this number can be any number; I also tried 20 or 10 or things like that.

[00:03:01]
So sometimes you can play with these values to see if it affects the accuracy of your model. So if you want to write something other than 32, it should also work, but it means that this is going to apply 32 filters in one layer when it takes the data from one layer to another.

[00:03:23]
And then, just to make it easier, we're gonna have a variable called numClasses, and we're gonna set it to 2. So if later on you decide to take the end result of this project and add more types of drawings, you can just change the variable here instead of figuring out where it's used. We'll use it later.
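Pulling those variables together, the top of the create-model file would look roughly like this (the numClasses value of 2 is an assumption based on the transcript; change it to match your own number of drawing types):

```js
const tf = require('@tensorflow/tfjs');

// Size of the window the conv2d layers slide over the input.
const kernelSize = [3, 3];
// Number of filters each conv2d layer applies; 20 or 10 also work.
const filter = 32;
// Number of output labels; assumed to be 2 here. Bump this if you
// add more types of drawings to the project later.
const numClasses = 2;
```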

[00:03:50]
Okay, so we're just starting with these variables, we might add a few later on, depending on the layers that we decide to add to our model. But here we can start to declare the model. So we can say const model, we're going to use a sequential model again.

[00:04:05]
So tf.sequential, and sequential just means that you have layers that are added one after the other, and the data gets processed one layer at a time. So you can declare it like this, and then we can add individual layers. So yesterday, if you remember, we declared an object with a layers property and an array of layers, but there's another way to write this as well, because if you add a lot of layers, it can start to be confusing.
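As a rough sketch, the two styles look like this; the second is what we'll use here:

```js
// Style 1 (yesterday): pass the whole array of layers up front.
const modelA = tf.sequential({
  layers: [
    // ...layers declared inline here...
  ],
});

// Style 2 (today): start empty and add layers one by one,
// which stays readable as the layer count grows.
const model = tf.sequential();
// model.add(...) calls follow below.
```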

[00:04:32]
So what you can do is just call tf.sequential, and then you can call an add method. So we're gonna add a first layer; I'm gonna call tf.layers.conv2d. And inside we can pass different parameters. And what I was talking about yesterday with the shape, when we resized from 200 down to 28 by 28, this is what we're going to do here as well.

[00:05:00]
So the parameter for this is called inputShape, and it's gonna be an array of 28 by 28, cuz this is what we resized our images to. But there's going to be a third number, and that's going to be 4. And I believe this is to represent the fact that when we're analyzing images, you have a value for R, G, B, and A, so that's four values.

[00:05:23]
So here we're basically having a 28 by 28 image with another dimension of four values, one each for R, G, B, and A. So our model is going to be expecting that kind of data. And then we have the parameter filters, which is required, and this is going to be our filter variable.

[00:05:46]
And then we have also our kernelSize, which is also required here. So we can just pass it like this, cuz we already had the variable. And then we have something that we talked about yesterday as well, the activation function. And here we're going to use the same thing that we used yesterday.

[00:06:03]
And one thing that I actually researched this morning is that you do have two other types of activation functions, sigmoid and softmax. But apparently, if you want to use softmax, it's usually used on the last layer of your model. So here, because we're on the first, it's apparently recommended to use ReLU activation.
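So the first layer of the model would look roughly like this, using the variables declared earlier:

```js
// First layer: a 2D convolution over the 28x28 RGBA images.
model.add(tf.layers.conv2d({
  inputShape: [28, 28, 4], // 28x28 pixels, 4 channels (R, G, B, A)
  filters: filter,         // 32 filters, declared above
  kernelSize: kernelSize,  // the 3x3 window, declared above
  activation: 'relu',      // softmax is usually saved for the last layer
}));
```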

[00:06:24]
So we're going to use that one. And at this point we have the first layer of our model. You usually wouldn't just stop your model at one layer, so we're gonna add another one. I'm actually gonna have to read the docs, cuz I forget what this one does, but we can do model.add, and then again we have tf.layers.

[00:06:54]
And here we're gonna do maxPooling2d, which I think is used for images as well, and that's probably why I used it, but I forgot exactly what it's used for, so I can look at the docs in a bit. But the parameter that we're gonna use here is called poolSize, and I think I had an array of 2 by 2, so it might be doing just another windowing thing.

[00:07:22]
So we can do 2 by 2, or you can declare it as a variable like we did with kernelSize up here. So we have a first layer, a conv2d layer, that is expecting the shape of our images, and then we have a maxPooling2d. And then we're going to flatten this data, because the next layer is going to be a dense layer and it's expecting more of a one-dimensional input.
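That second layer would look roughly like this:

```js
// Second layer: downsample the conv output with a 2x2 pooling window.
model.add(tf.layers.maxPooling2d({
  poolSize: [2, 2],
}));
```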

[00:07:49]
So we can do model.add, and we have tf.layers again, .flatten, which is a function that you call. So this is going to take the output of the previous layer and output a flattened tensor. And the reason we do that is because in the next layer, we're going to create a dense layer, and that's going to expect a flattened tensor.
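The flatten layer itself is a one-liner:

```js
// Third layer: flatten the 2D output into a 1D tensor
// so the dense layers below can consume it.
model.add(tf.layers.flatten());
```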

[00:08:28]
So if it doesn't make a lot of sense, it's okay. The thing is, if you're not doing it every day, and I don't do it every day, sometimes I even forget what these things do, but you can play around with different things. So for example, if the next layer was not a dense layer, I don't think you would need this.

[00:08:47]
But let's try and see what the accuracy of our model is going to be later on, and then we can play around, maybe look at the docs and try a layer I haven't used before. So as I was saying, we're flattening the output of the previous layer, and we're going to create a dense layer.

[00:09:05]
So we're gonna do model.add, and here we have tf.layers.dense. In terms of units, again, yesterday I mentioned what these are: units means the number of outputs that are going to be returned. And here I have the number 10, but I can see from my notes that I tried another number before as well.

[00:09:29]
So again, depending on what we pick here, it will affect the result, but we can play around a little bit. I'm gonna have another activation that's gonna be ReLU as well, cuz we are not at the last layer. And then we're gonna add one more, and that's going to be our last layer, so we're gonna change our activation.

[00:09:49]
So you can copy the previous line, so still model.add and then dense. But because it is our last layer, if you remember, the units need to be the number of labels that we're expecting to be returned. So here I had created my numClasses before, so here, instead of 10, we need to return numClasses.

[00:10:08]
So if you had kept 10 here on your last layer, it would complain, because it would be like, well, you're expecting it to have 2 classes but you return 10. And the activation, oops, typo there, activation. And because we're at the last layer, you could leave ReLU if you want, but let's spice it up a little bit and write softmax.
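Those two dense layers would look roughly like this:

```js
// Fourth layer: an intermediate dense layer; 10 units is just
// a starting point you can tweak.
model.add(tf.layers.dense({
  units: 10,
  activation: 'relu',
}));

// Fifth and last layer: units must match the number of classes,
// and softmax turns the outputs into class probabilities.
model.add(tf.layers.dense({
  units: numClasses,
  activation: 'softmax',
}));
```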

[00:10:33]
Okay, so at this point, I did one, two, three, four, five, all right, five layers. I think it's pretty good. There could be a lot more, or you could have less, but then it means the data goes through less processing. So we're gonna pause here. We're not done, there's gonna be a few more lines, but we can get back to it in a few minutes.

[00:10:52]
Cool, so now that we have created our sequential model and added a few layers to it, we need to do a step that we also did yesterday, where we actually compile it with an optimizer function. So first, we can declare an optimizer variable. And we're gonna use the same one that we used yesterday, so tf.train.adam.

[00:11:19]
And what we pass into it, if you remember, is the learning rate. And here I'm gonna use the same value as I used yesterday, but you can tinker with it later as well; it was 0.0001. I made some changes this morning, trying to play around with a few different values, and we can do that a bit later.

[00:11:41]
First of all, we're going to finish what we're doing, and then if the accuracy is not good, we can play around with different values in here. But once we have the optimizer, we need to call the compile function. So, model.compile, and it requires an object as a parameter, and we're passing our optimizer.

[00:11:59]
And then if you remember, we had, oops, a loss parameter as well. We're gonna use the same one as yesterday, categoricalCrossentropy. Another parameter that we're going to have this time is metrics, and we want to get the accuracy. So whenever we run our model, it usually just returns the loss.
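The compile step would look roughly like this:

```js
// Adam optimizer with the same learning rate as yesterday.
const optimizer = tf.train.adam(0.0001);

model.compile({
  optimizer: optimizer,
  loss: 'categoricalCrossentropy',
  metrics: ['accuracy'], // report accuracy alongside the loss
});
```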

[00:12:23]
So you can see that if it goes down, then you know your accuracy should go up, but here we're also going to expose the accuracy. So you'll see when we run this all together in the terminal, in the CLI, the value of the accuracy will change over time as the model goes through different steps.

[00:12:44]
And it will make sense when we actually run the whole thing. But at this point also, again, we're in Node.js, so we're going to do module.exports, and we're going to export our model. So in this file, we're not training it yet, we're just declaring our model separately.

[00:12:59]
I thought that it would be a bit easier to make changes to the model if it's separate, having everything in different files, yep.
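The export is a single line at the bottom of the file:

```js
// Export the compiled model so the training file can require() it.
module.exports = model;
```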
>> Kind of a general question about layers: is there any strategy to how you select and add layers for your own project, or is it more trial and error?

[00:13:19]
>> So to me, in general, it's more trial and error from what I've seen. I think there's a few things that I know need to work a certain way. So for example, I just know that in terms of units, the number of units in the last layer needs to be the number of classes that you're using when you're training.

[00:13:34]
So that I know, but then, for example, maxPooling2d, I don't think I've used it before, but I thought, let's try and see what it does. And here we're working with images, so I thought a conv2d layer would be good. But to me, it's more trial and error; I think that if I was doing machine learning every day, there are probably strategies, I just don't know them right now.

[00:13:59]
So if people, after this workshop, end up doing a lot more machine learning, maybe they'll realize that, actually, we should have used another activation function or something like that. So I do believe that even for people who do it every day, sometimes there's a bit of tweaking anyway. In terms of, for example, the units here, I picked 10.

[00:14:16]
But I think you could definitely just pick 20 or 5, and then see what happens. So you'll tweak it and see how it affects the accuracy of your model. Even in terms of the number of layers, here we have five, but in my original example I think I had seven, but I was like, let's try to mix it up a little bit and see what happens.

[00:14:37]
So we're gonna do that and we're gonna run it. And then, if it actually comes back with a good accuracy, we could stop there, we don't have to add more layers. And if we realize that, the accuracy is less than 0.5, then there's multiple things that we can change then.

[00:14:52]
It could be the number of layers, it could be the type of layers, it could just be that there's not enough data, it could be a lot of different things. So that's why there's not a set kind of recipe on how to get great accuracy. Or maybe there is, but I just wouldn't know it then.

[00:15:09]
So at this point, I'm just gonna save my file. So here in this file, we created a sequential model, we added layers to it, we compiled it, and we're exporting it to be able to use it somewhere else.
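For reference, here's what the full create-model file sketched in this lesson looks like when the pieces are put together (numClasses assumed to be 2, as above):

```js
const tf = require('@tensorflow/tfjs');

const kernelSize = [3, 3];
const filter = 32;
const numClasses = 2; // assumed: two types of drawings

const model = tf.sequential();

model.add(tf.layers.conv2d({
  inputShape: [28, 28, 4],
  filters: filter,
  kernelSize: kernelSize,
  activation: 'relu',
}));
model.add(tf.layers.maxPooling2d({ poolSize: [2, 2] }));
model.add(tf.layers.flatten());
model.add(tf.layers.dense({ units: 10, activation: 'relu' }));
model.add(tf.layers.dense({ units: numClasses, activation: 'softmax' }));

const optimizer = tf.train.adam(0.0001);
model.compile({
  optimizer,
  loss: 'categoricalCrossentropy',
  metrics: ['accuracy'],
});

module.exports = model;
```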
