The "Deep Learning & AI" Lesson is part of the full, A Practical Guide to Machine Learning with TensorFlow 2.0 & Keras course featured in this preview video. Here's what you'd learn in this lesson:

Vadim explains that when engineers think about artificial intelligence, they imagine a computer that can pass the Turing test and be creative. Once information is processed through deep learning, artificial intelligence determines how to use the data. An example is given of an image's style being captured through deep learning and applied to another image.


Transcript from the "Deep Learning & AI" Lesson

[00:00:00]
>> So back to our discussion. I said that statistics historically transitioned to machine learning, and this transition came, I'd say, with increased complexity. Statistics tries to simplify everything by reducing the information you have. And when I'm talking about information, let's say we asked 100 people about their shoe sizes, right?

[00:00:28]
But we reduced all of this information to just the average size, so that was the reduction. Machine learning operates with distributions. So for instance, machine learning can help you actually find not just the average but the distribution of the sizes, right? So for instance, if that's the size axis, let's say we can somehow figure out that we have two local maxima, for instance, at size eight and at size 11.

[00:01:03]
In this case, it's probably better for me to create shoes of size eight and size 11. So machine learning is kind of doing this distribution analysis. And as I showed, deep learning was simply a progression of machine learning. So we're just playing around with a lot of those neurons.
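
For reference, here is a minimal Python sketch of that idea, using made-up survey data rather than anything from the course: the average lands around 9.5, a size almost nobody in the sample wears, while a simple histogram exposes the two peaks.

import numpy as np

# Hypothetical survey data: two clusters of shoe sizes, around 8 and around 11.
rng = np.random.default_rng(0)
sizes = np.concatenate([rng.normal(8.0, 0.5, 50), rng.normal(11.0, 0.5, 50)])

print("average size:", round(float(sizes.mean()), 2))  # ~9.5, which few people actually wear

# Looking at the whole distribution instead of one number reveals the two peaks.
counts, edges = np.histogram(sizes, bins=np.arange(6.0, 13.5, 0.5))
for count, left in zip(counts, edges):
    print(f"{left:4.1f}-{left + 0.5:4.1f} | {'#' * int(count)}")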

[00:01:30]
We figured out a way to learn this information transformation. If you think about it conceptually, even with deep learning we're doing this information transformation. So our input signal was the image itself, right? It was all those pixels, the intensities of the pixels. But in the end, we reduced or kind of shrank this information to just one bit, with a label like whether we're looking at a hotdog or not hotdog, right?

[00:02:04]
Then that will be just your last neuron, activated or not. So deep learning is doing that. But at the same time, when we talk about artificial intelligence, AI, at least when I'm talking about AI, I'm thinking of something towards the Turing test. I want my computer to be able to think.
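
As a rough illustration of that reduction, here is a minimal Keras sketch, not the course's actual model: a tiny CNN that compresses a whole image down to one sigmoid output neuron. The shapes and layer sizes are illustrative.

import tensorflow as tf

# Minimal sketch: a small CNN that reduces a whole image to a single output neuron.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # one "bit": hotdog or not hotdog
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()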

[00:02:24]
What does it mean to think? It means to somehow, maybe, even create things. Creativity is definitely something we associate with intelligence. But our deep learning models right now do not create anything, they're simply kind of reducing the information, right? So we take the images and just say whether we're looking at a hotdog or not hotdog, or whether it's spam or not spam if we're providing emails.

[00:02:56]
So the next step with AI is what we do with those deep learning models. So for instance, remember with deep learning we have this almost triangle diagram, right? Where we just provided some inputs and then reduced them to one neuron, or several neurons. Interestingly enough, you can do kind of the reverse operation.

[00:03:23]
You can simply take this neuron and, by providing some of the information we collected in those layers, you can recreate some things. So for instance, you can train your model to redraw images for you. And one interesting example is the notebook we have, actually. And you know what, I will switch to it and demonstrate to you exactly what I mean.
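
One way to reach that intermediate information in Keras is a feature-extractor model whose outputs are chosen intermediate layers. The sketch below assumes a pre-trained VGG19; the course notebook may use a different network or different layers.

import tensorflow as tf

# Sketch: expose what intermediate layers "saw" instead of only the final neuron.
# The layer names are real VGG19 layer names; the choice of layers is illustrative.
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
vgg.trainable = False

layer_names = ['block1_conv1', 'block2_conv1', 'block5_conv2']
extractor = tf.keras.Model(
    inputs=vgg.input,
    outputs=[vgg.get_layer(name).output for name in layer_names])

# Running any image through this model yields early- and late-layer activations,
# the raw material for recreating or re-styling an image.
image = tf.random.uniform((1, 224, 224, 3))  # placeholder for a real photo, values in [0, 1]
features = extractor(tf.keras.applications.vgg19.preprocess_input(image * 255.0))
for name, feat in zip(layer_names, features):
    print(name, feat.shape)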

[00:03:59]
So in our notebooks, if you look at creating styles for images, that's our notebook 0104. We can even execute it right now. What we're doing is we simply grab a pre-trained model, and we will need two input images. So let's say we can just take downtown Minneapolis, right? That's the image.

[00:04:34]
And the style image right here. And what will happen, let's go back to our diagram. When we're training our model on our style image, we can just see what neurons were activated in the first, lower layers. Maybe the first couple of layers. The idea is that our neurons might somehow grab the style of drawing from that original image, original picture, or drawing in our case.

[00:05:11]
But then we can use the second picture to do kind of the same thing, but capture the weights and all the activation pathways from the last layers. And then when we're recreating the image, we'll just substitute those first layers where we have the information about the style.

[00:05:36]
And we're gonna put it right here. So it means that the main information, kind of what objects we're looking at, will be preserved from the original image, in our case, downtown Minneapolis. But the style will be modified to whatever style we have in our drawing. So the end result will basically be a stylized picture of downtown Minneapolis.
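
As a hedged sketch of that idea, and not necessarily how notebook 0104 is organized: style can be summarized as Gram matrices of early-layer activations of the style image, content as late-layer activations of the photo, and a copy of the photo optimized to match both. The placeholder tensors below stand in for the real pictures.

import tensorflow as tf

# Hedged sketch, not the course's exact notebook: style from early VGG19 layers
# (as Gram matrices), content from a late layer, then optimize a copy of the
# content image to satisfy both.
style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1']
content_layers = ['block5_conv2']

vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
vgg.trainable = False
extractor = tf.keras.Model(
    vgg.input,
    [vgg.get_layer(n).output for n in style_layers + content_layers])

def gram_matrix(features):
    # Channel-to-channel correlations: a rough summary of the "style" in one layer.
    gram = tf.einsum('bijc,bijd->bcd', features, features)
    shape = tf.shape(features)
    return gram / tf.cast(shape[1] * shape[2], tf.float32)

def extract(image):
    outputs = extractor(tf.keras.applications.vgg19.preprocess_input(image * 255.0))
    styles = [gram_matrix(o) for o in outputs[:len(style_layers)]]
    contents = outputs[len(style_layers):]
    return styles, contents

# Placeholders stand in for the real photos (e.g. downtown Minneapolis and a painting);
# shape is (batch, height, width, channels) with values in [0, 1].
content_image = tf.random.uniform((1, 256, 256, 3))
style_image = tf.random.uniform((1, 256, 256, 3))

target_styles, _ = extract(style_image)
_, target_contents = extract(content_image)

generated = tf.Variable(content_image)  # start from the content photo and repaint it
optimizer = tf.keras.optimizers.Adam(learning_rate=0.02)

for step in range(100):
    with tf.GradientTape() as tape:
        styles, contents = extract(generated)
        style_loss = tf.add_n([tf.reduce_mean((s - t) ** 2)
                               for s, t in zip(styles, target_styles)])
        content_loss = tf.add_n([tf.reduce_mean((c - t) ** 2)
                                 for c, t in zip(contents, target_contents)])
        loss = 1e-2 * style_loss + 1e4 * content_loss
    grad = tape.gradient(loss, generated)
    optimizer.apply_gradients([(grad, generated)])
    generated.assign(tf.clip_by_value(generated, 0.0, 1.0))
# "generated" now holds the stylized picture: the content of one image, the style of the other.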
