Check out a free preview of the full A Practical Guide to Machine Learning with TensorFlow 2.0 & Keras course

The "Training Models" Lesson is part of the full, A Practical Guide to Machine Learning with TensorFlow 2.0 & Keras course featured in this preview video. Here's what you'd learn in this lesson:

Vadim explains that in machine learning, data is fed to a model and processed through a CPU or GPU, and the model's weights are frozen or stored once training ends. Afterwards, a new image is fed to the model, and with proper training the model should be able to recognize what the image represents.


Transcript from the "Training Models" Lesson

[00:00:00]
>> Let's step back and think about it abstractly. Imagine that we have some compute black box, right, some compute. It can be your CPU, GPU, TPU, FPGA, and I can talk about those later. But it's just doing some calculations. So what kind of calculations, what kind of operations, is it doing?

[00:00:34]
Well, it depends on what you provide to it, right? So we need to provide the algorithm, or a model, right? The list of instructions for your compute to process, so let's just draw it as a couple of lines. That's what our compute will be doing, right?

[00:00:58]
This is our algorithm, and we're just executing it. To get something useful, we also need to provide input. It can be input data or some text, right? Then this input data is processed and we'll get some data out: something will be printed, or we can store something on disk, right?

[00:01:23]
But that's conceptually what computer science teaches us. We have some compute, we provide the instructions for that compute to follow in the form of an algorithm, we feed the input data in, and we get output data as the result. In machine learning, basically, we can see that those two are interchanged.
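A minimal sketch of that classical picture, in Python: the algorithm is fixed, hand-written code, and the output follows directly from the input. The function and the rule inside it are purely illustrative, not from the course.

```python
# Classical computing: a hand-written algorithm maps input data to output data.
def is_hot_dog(pixel_brightness_sum):
    # A made-up, hard-coded rule standing in for "the algorithm".
    return pixel_brightness_sum > 1000

# Input data in, output data out -- the rules themselves never change.
print(is_hot_dog(1500))  # True
print(is_hot_dog(200))   # False
```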

[00:01:53]
Meaning that in machine learning, when we're training our model, we take that output data and provide it to our compute as well, right? So remember the previous diagram, a pretty messy one. [LAUGH] When we did forward propagation, we needed to know if we were looking at a hot dog or not a hot dog, right?

[00:02:39]
So we needed an annotated data set. Meaning that we not only need thousands of those images with hot dogs and not hot dogs, we also need to know whether each one is a hot dog or not. So back to our schema, the one we just drew: when we're training the neural network, we're providing our compute with images, let's actually switch, images of hot dogs and images of not hot dogs.
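A minimal sketch of such an annotated data set, assuming toy NumPy arrays rather than a real image pipeline. The array shapes and the 1 = hot dog / 0 = not hot dog encoding are illustrative assumptions, not the course's actual data.

```python
import numpy as np

# Hypothetical annotated dataset: every image comes paired with a label.
# Images: 1,000 RGB images of size 64x64, values scaled to [0, 1].
images = np.random.rand(1000, 64, 64, 3).astype("float32")

# Labels: 1 = hot dog, 0 = not hot dog (one label per image).
labels = np.random.randint(0, 2, size=(1000,)).astype("float32")

print(images.shape, labels.shape)  # (1000, 64, 64, 3) (1000,)
```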

[00:03:18]
But also, when we do this training, we need to say what the label is, right? So, hot dog or not. And our compute, by readjusting those weights and biases, will give us back a trained model, right? So conceptually, that's what machine learning is doing when you're doing the training.
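A minimal Keras sketch of that training step, reusing the toy images/labels arrays from the previous snippet. The architecture, layer sizes, and hyperparameters here are illustrative assumptions, not the model built in the course.

```python
import numpy as np
import tensorflow as tf

# Toy annotated data (stand-in for real hot dog / not hot dog images).
images = np.random.rand(1000, 64, 64, 3).astype("float32")
labels = np.random.randint(0, 2, size=(1000,)).astype("float32")

# A small convolutional model: its weights and biases are what training adjusts.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "hot dog"
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Training: images plus their labels go in; readjusted weights come out.
model.fit(images, labels, epochs=3, batch_size=32)
```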

[00:03:46]
Provided the input data and corresponding labels, it will retrain or adapt the weights and biases to return the correct result. What you're getting back is the model, all of those weights and biases. So when the model is trained, you can actually freeze them, right, and do the predictions.
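A minimal sketch of "freezing" the result, assuming the trained `model` from the previous snippet: the learned weights are marked non-trainable and stored on disk so they can be reused for inference only. The file name is hypothetical.

```python
import tensorflow as tf
# `model` is the trained Keras model from the previous sketch.

# Once training is done, keep the learned weights fixed and store the model.
model.trainable = False            # optional: mark all layers as non-trainable
model.save("hot_dog_model.h5")     # persist architecture + weights to disk

# Later (or in another process), reload the frozen model for inference only.
restored = tf.keras.models.load_model("hot_dog_model.h5")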

[00:04:10]
You can tell whether you're looking at a hot dog or not a hot dog by just feeding in some new image the model has never seen before. It will make the prediction, the correct prediction of whether it's looking at a hot dog or not, by utilizing all of those previously reinforced connections between the neurons.
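A minimal sketch of that prediction step, assuming the frozen model saved in the previous snippet and a brand-new image the model has never seen. Here a random array stands in for a real photo; the 0.5 threshold and file name are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Reload the frozen model saved in the previous sketch.
restored = tf.keras.models.load_model("hot_dog_model.h5")

# A new, never-before-seen image (random values standing in for real pixels),
# with a leading batch dimension of 1.
new_image = np.random.rand(1, 64, 64, 3).astype("float32")

# The model returns the probability that the image shows a hot dog.
probability = restored.predict(new_image)[0, 0]
print("hot dog" if probability > 0.5 else "not hot dog", probability)
```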
