Check out a free preview of the full A Practical Guide to Machine Learning with TensorFlow 2.0 & Keras course

The "Sentiment Analysis" Lesson is part of the full, A Practical Guide to Machine Learning with TensorFlow 2.0 & Keras course featured in this preview video. Here's what you'd learn in this lesson:

Vadim demonstrates how to train a model to predict whether comments are positive or negative, and explains that prediction quality depends on the given dataset and the preprocessing of the text.


Transcript from the "Sentiment Analysis" Lesson

[00:00:00]
>> Now we can actually do the sentiment analysis itself. So right now, let's see, we can use the IMDB dataset once again. In our training set there will be 25,000 reviews, which we're gonna use as training entries, and the test set will be another 25,000.
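
The notebook itself isn't shown in the transcript, but as a sketch, this step can be done with tensorflow_datasets:

```python
import tensorflow_datasets as tfds

# Load the IMDB reviews dataset: 25,000 labeled reviews for training
# and another 25,000 for testing.
(train_data, test_data), info = tfds.load(
    'imdb_reviews',
    split=['train', 'test'],
    as_supervised=True,   # yield (text, label) pairs instead of dicts
    with_info=True)
```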

[00:00:26]
And the training examples look like this, right? It's actually the text itself, that's what we have in our training examples, and labels: one for positive reviews and zero for negative ones. What we'll be doing is we'll just grab a pre-trained model, and I can just download it from TensorFlow Hub.
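
Peeking at a batch might look like this, reusing the train_data dataset from the sketch above:

```python
# Inspect the first few training examples: the features are the raw
# review text, and the labels are 1 (positive) or 0 (negative).
train_examples, train_labels = next(iter(train_data.batch(3)))
print(train_examples)
print(train_labels)
```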

[00:00:55]
So let me actually execute this code. I think it should be fun to try it live. And it's printing out the labels, yeah, exactly the same. We can just download this model from TensorFlow Hub, and there are a couple of other pre-trained models available there you can play with. So these pre-trained layers will then be talking to 16 fully connected neurons, right?
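
A minimal sketch of that architecture; the transcript doesn't name the exact Hub module, so the publicly available 20-dimensional gnews-swivel embedding (which matches the "20 dim" mentioned below) is assumed:

```python
import tensorflow as tf
import tensorflow_hub as hub

# A pre-trained 20-dimensional text-embedding layer from TensorFlow Hub.
hub_layer = hub.KerasLayer(
    'https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1',
    input_shape=[], dtype=tf.string, trainable=True)

model = tf.keras.Sequential([
    hub_layer,                                       # text -> 20-dim vector
    tf.keras.layers.Dense(16, activation='relu'),    # 16 fully connected neurons
    tf.keras.layers.Dense(1, activation='sigmoid'),  # positive/negative output
])
```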

[00:01:27]
And at the end, once again, one neuron, which corresponds to positive or negative feedback, and training should not take that long. Let's see, on my previous run it was only one minute. So I think we can spend this time training our model. So right here, let's go quickly back to what we're doing here.
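
Compiling and training might look like the sketch below; the 40 epochs are inferred from the comment later on that the run plateaued at 20 and the extra 20 were overkill:

```python
# Compile and train; a run like this takes roughly a minute.
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

history = model.fit(train_data.shuffle(10000).batch(512),
                    epochs=40,
                    validation_data=test_data.batch(512))
```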

[00:01:52]
If you see that 20-dim, it means that we're using word embeddings with 20 dimensions. So our embedding layer is the one I've downloaded from TensorFlow Hub. It was pre-trained on some other big dataset and tries to, once again, put those words into those 20 dimensions.

[00:02:18]
And it tries to understand how words relate to each other, right, to find synonyms and so on and so forth. And that helps us to process the text. It is usually recommended, if you're trying to, for instance, create your own text analysis model, to just use pre-trained layers, especially embedding layers.
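
As a quick illustration, the Hub layer from the sketch above maps whole strings straight to 20-dimensional vectors, so related words and phrases end up close together in that space:

```python
# Each input string becomes one 20-dimensional embedding vector.
vectors = hub_layer(tf.constant(['good movie', 'bad movie']))
print(vectors.shape)   # (2, 20)
```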

[00:02:44]
And, let's see, the accuracy we're getting on our predictions right now is 86%, almost 87%, which is not that bad. And let's try to plot the history just to see if we haven't overtrained, and I think we're performing quite well, right? But at the same time, we almost plateaued at around 20 epochs.
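
Plotting the history from the training sketch above:

```python
import matplotlib.pyplot as plt

# Plot training vs. validation accuracy to check for overtraining;
# both curves flatten out at around 20 epochs.
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='validation')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()
```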

[00:03:12]
So the additional 20 epochs were kind of overkill. I could have stopped my training at around 20 epochs, and that would have been enough. And yeah, we can even do the prediction. So for instance, we provided this review, right? And that was a negative review. We can call model.predict and predict, for instance, on the first three comments.
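
Using the model from the sketches above, scoring the first three test reviews looks like this:

```python
# Predict sentiment for the first three test reviews: outputs near 0
# mean negative, outputs near 1 mean positive.
examples, labels = next(iter(test_data.batch(3)))
print(model.predict(examples))
```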

[00:03:40]
So you can see that our model said the first two comments were negative and only the third comment was positive. So if I, for instance, print it out, the third comment is number two, because we're zero-based, right? So this comment, let's take a look at it.

[00:04:01]
Can we see that it's negative right from the get-go? Probably not, okay. But yes, it is positive, and it was recognized by our system as positive with almost 99% confidence. So yeah, that's how you can actually do the predictions on your own phrases. Let's play around with it a little bit.

[00:04:25]
So, for instance, yeah, "good movie but bad actor". So the thing about this embedding is that it's not actually taking the word positions into account. So the phrases "good movie but bad actor" and "bad movie but good actor" would produce exactly the same result. At the same time, capitalized "Good" and lowercase "good" give slightly different results.
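
Trying that with the model sketched above:

```python
# Word order is ignored by this embedding, so these two phrases score
# essentially the same; capitalization, however, still matters slightly.
phrases = tf.constant(['Good movie but bad actor',
                       'Bad movie but good actor'])
print(model.predict(phrases))
```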

[00:04:57]
Once again, everything depends on the dataset and the preprocessing of your text. So ideally, to not distinguish between capitalized "Good" and lowercase "good", I should have lowercased my whole training dataset before doing all the training. But to take the order of the words in my sentences into account, we need to switch to a different type of network.
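
One simple version of that preprocessing, assuming the tf.data pipeline from the first sketch:

```python
# Lowercase the text before training so that 'Good' and 'good'
# map to the same embedding.
train_data_lower = train_data.map(
    lambda text, label: (tf.strings.lower(text), label))
```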

[00:05:25]
So, a fully connected neural network, right, the one we used at the beginning. Remember that it's connecting all the inputs to our neuron, right? So if you think about it, it's simply losing the information about where the word was coming from. That's why it basically doesn't take word order into account.

[00:05:51]
But there are different models, for instance, recurrent neural networks. What they do is actually try to memorize the position a particular instance was at, and take into account the order of, in our case, words. And that's the last example in our text part.
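
As a rough preview, an order-aware model might look like the sketch below; this is an assumption, not the lesson's own code, and TextVectorization lives under tf.keras.layers.experimental.preprocessing in early TF 2.x releases:

```python
# Tokenize the text, embed each token separately, then run an LSTM over
# the sequence so word positions are preserved.
vectorize = tf.keras.layers.TextVectorization(
    max_tokens=10000, output_sequence_length=100)
vectorize.adapt(train_data.map(lambda text, label: text))

rnn_model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(10000, 20),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
```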
