
Lesson Description
The "Improve Accuracy of Model" Lesson is part of the full, Hard Parts of AI: Neural Networks course featured in this preview video. Here's what you'd learn in this lesson:
Will summarizes why applying all the individual improvements at once leads to better accuracy in the overall system. However, continued tweaking is required to get closer to 100% accuracy. Steady adjustments to a model like this are the foundations of machine learning.
Transcript from the "Improve Accuracy of Model" Lesson
[00:00:00]
>> Will Sentance: So we definitely improved our accuracy, fantastic, but something's up. Well, before we go into what's up: first, to figure out, to learn, our multipliers with a larger sample, we used independent checks. Independent means take just one of these. These are all zero, remember, and let's just try tweaking this one up.
[00:00:25]
Did it make it net better? More correct conversions, did the converter work better? Kayla's nodding. It definitely did. Then we tried tweaking this one up. Did it make it net better? Did it make more conversions if we had this as a higher number than zero?
>> Will Sentance: It did not. Kayla is shaking her head, and exactly right, it did not.
[00:00:46]
So we said, okay, decrease the number there, decrease the weight. Then Jamie spotted, for our bottom-left corner here: did it work better, up or down? And Jamie did a quick calculation, holding everything else at zero, and went, okay, if that one's negative. Well, I think Jamie probably spotted the mirror image, the reflection.
[00:01:07]
But if that one's negative, that one's negative, that's good. If they're all zeros and that one's negative, that's better for the conversions. And then this one doesn't matter, and this one would be making the conversion worse, the negative one here, but two out of four improving and one with no change improves the overall conversion.
[00:01:29]
As Jamie said. Then we did the other three, but there I just kind of told you the direction to go for each of those. All we did was exactly the same step, the same step we did here, here, here. We did it for these three, always holding constant all the other weights, all the other multipliers.
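To make that independent-check procedure concrete, here is a minimal Python sketch. Everything in it is hypothetical: the nine multipliers, the four-example sample, and the count_correct scoring rule are stand-ins for the course's actual model. The procedure, though, is the one described above: nudge one multiplier up and down while every other multiplier stays at zero, and keep whichever direction converts more of the sample correctly.

```python
# Minimal sketch of the independent check; all numbers here are made up.
SAMPLE = [  # (nine inputs, target output) pairs standing in for the course's sample
    ([1, 0, 1, 0, 1, 0, 1, 0, 1], 1),
    ([0, 1, 0, 1, 0, 1, 0, 1, 0], 0),
    ([1, 1, 1, 0, 1, 0, 0, 1, 0], 1),
    ([0, 0, 0, 1, 0, 1, 1, 0, 1], 0),
]

def count_correct(weights):
    """How many examples convert correctly: sum of input * multiplier,
    counted as a 1 when it crosses an arbitrary 0.5 threshold."""
    correct = 0
    for inputs, target in SAMPLE:
        output = 1 if sum(i * w for i, w in zip(inputs, weights)) > 0.5 else 0
        correct += (output == target)
    return correct

def best_direction(weights, index, step=1.0):
    """Tweak one multiplier up and down, holding every other one constant,
    and report which direction converts more of the sample correctly."""
    base = count_correct(weights)
    up, down = list(weights), list(weights)
    up[index] += step
    down[index] -= step
    if count_correct(up) > base and count_correct(up) >= count_correct(down):
        return +1   # increasing this multiplier helps
    if count_correct(down) > base:
        return -1   # decreasing it helps
    return 0        # this multiplier alone makes no net difference

# Check each of the nine multipliers independently, always from all-zero weights.
print([best_direction([0.0] * 9, i) for i in range(9)])
```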
[00:01:45]
But surely that can't be possible: to have said, the best direction to go, if everything's zero, is to increase this one to one, because it converts more of our sample correctly, and then do that for this one, do that for this one, and then do them all at the same time.
[00:02:04]
Surely it won't work. It worked for one by itself, and it worked for this one by itself, but that was going down when everything else was zero, not when everything else has also been changed. Surely we can't check the impact of each weight change independently like that and then apply them all together? And yet we can, and it's absolutely mission-critical to finding the right multipliers, for any size of sample, that we are able to do that.
[00:02:36]
If we keep our tweaks small enough, we can, for two reasons. One, if we keep them small enough, we avoid interaction effects. Even though we checked, does increasing this multiplier from zero to one convert more of our sample: the one multiplied by the one, the one multiplied by the zero, one multiplied by the one, one multiplied by the one, and everything else is zero.
[00:03:03]
Does it convert more of our sample? Yes, okay. Say that for the middle-bottom multiplier: increase it. Then forget about it, get everything back to zero, including the one we just tweaked, and try tweaking this one up. That made things worse, so let's assume we need to tweak it down.
[00:03:23]
It could be that we'd need to change it, but in fact, we checked again, and it was: tweak it down. Then the same here. We didn't do these ones explicitly, but it was the same here and here, always going back to all the other ones being zero.
[00:03:37]
It turns out, though, that this works due to, what's the discipline of math where we make small changes and study how things change, where we reason about rates of change?
>> Speaker 2: Calculus.
>> Will Sentance: It's calculus. This is known as partial differentiation: where we make small changes to these multipliers independently of each other, we can do them all together and be confident that they are all still in the right direction of change.
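As a rough illustration of what partial differentiation means here, the sketch below estimates each partial derivative numerically: nudge one multiplier by a tiny amount, hold the others fixed, and measure how much a loss function changes. The quadratic toy_loss is invented purely for the demonstration; its minimum is placed, arbitrarily, at the multipliers the first round ends with.

```python
# Numerical sketch of partial derivatives: one multiplier is nudged at a time,
# everything else held constant. The loss function is a made-up stand-in.

def partial_derivatives(loss, weights, eps=1e-5):
    """Approximate d(loss)/d(weight_i) for each multiplier independently."""
    gradient = []
    for i in range(len(weights)):
        bumped = list(weights)
        bumped[i] += eps          # small change to this multiplier only
        gradient.append((loss(bumped) - loss(weights)) / eps)
    return gradient

def toy_loss(weights):
    """Hypothetical quadratic loss whose minimum happens to sit at the
    first-round multipliers mentioned in the lesson."""
    targets = [0, 0, 0, 0, 1, -1, -1, -1, -1]
    return sum((w - t) ** 2 for w, t in zip(weights, targets))

print(partial_derivatives(toy_loss, [0.0] * 9))
```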
[00:04:07]
And that is absolutely mission-critical, and is known as gradient descent: taking independent steps towards improved conversion accuracy for our sample. Independent steps, and then being able to do them all at once and know that they were the right direction, even though they were checked independently. But you're like, are they really going to be the right direction all together?
[00:04:34]
They will be, because of the two reasons we want them to be small. One is that it avoids interaction effects: it means that increasing this one here from zero to one when everything else is zero is the right move, but also that increasing it from zero to one when they've all had their little tweaks is still the right move, because the interaction effect is sufficiently small.
[00:04:55]
As in, you might be like, okay, this one here is the right one to move, to increase, but when I change all of these, might it no longer have been right to move it to one? Maybe it should have stayed at zero. No, if a tweak is small enough, then knowing which direction to go independently, we can apply them all at once and get a better conversion.
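Here is a small sketch of that key move: applying every independently chosen tweak in one go. The 0.1 step size is an arbitrary "small enough" choice, and the direction list shown is the up/down/no-change pattern found one multiplier at a time in the first round.

```python
def apply_all_tweaks(weights, directions, step=0.1):
    """Move every multiplier a small amount in its own best direction, all at once."""
    return [w + step * d for w, d in zip(weights, directions)]

# The directions found one by one (0 = leave alone, +1 = up, -1 = down), applied together:
print(apply_all_tweaks([0.0] * 9, [0, 0, 0, 0, +1, -1, -1, -1, -1]))
```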
[00:05:13]
We went from zero, if you remember, to three. And if we keep going, maybe we'll get to four. Well, or maybe not. Two, or (b), the reason we keep it small is to prevent overshooting. What if we had grown this not to one, but to 1.1? What would that have done to our converted output here?
[00:05:34]
Would it have overshot and no longer have met our target of one? Machine learning is steady, incremental, computed tweaking. After this first round, we're still not fully accurate. So what do we do? We do this all over again. Starting with 0, 0, 0, 0, 1, -1, -1, -1, -1, we will now assess: do we want to increase or decrease this one, holding all the rest of them constant?
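The overshoot worry is easy to show with a single multiplier. With one input of 1 and a target output of 1 (numbers invented for the illustration), a tweak of exactly 1 lands on the target, while a tweak of 1.1 shoots past it:

```python
# Why tweaks stay small: a step that is too big overshoots the target.
input_value, target = 1.0, 1.0

for step in (1.0, 1.1):
    weight = 0.0 + step                     # tweak the multiplier up from zero
    output = input_value * weight
    print(f"step {step}: output {output}, error {output - target:+.1f}")
# step 1.0 lands exactly on the target; step 1.1 overshoots it by 0.1.
```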
[00:06:04]
And if we do, we'll hopefully improve our accuracy a bit more or maybe not, and that's gonna be a problem we're gonna spot in a moment.
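Doing it "all over again" is just a loop around the sketches above: check each multiplier's best direction independently, apply all the small tweaks at once, and stop when the whole sample converts correctly or the rounds run out. This reuses the hypothetical count_correct, best_direction, and apply_all_tweaks helpers from earlier; the probe of 1.0 and step of 0.1 are arbitrary choices, not the course's values.

```python
def train(weights, rounds=20, probe=1.0, step=0.1):
    """Repeat rounds of independent direction checks plus one combined small tweak."""
    for _ in range(rounds):
        if count_correct(weights) == len(SAMPLE):
            break                            # every example already converts correctly
        directions = [best_direction(weights, i, probe) for i in range(len(weights))]
        weights = apply_all_tweaks(weights, directions, step)
    return weights

final = train([0.0] * 9)
print(final, count_correct(final))
```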