Lesson Description
The "Emotional Prompts" Lesson is part of the full, Practical Prompt Engineering course featured in this preview video. Here's what you'd learn in this lesson:
Sabrina shares research showing that LLMs can be enhanced by emotion. Emotional prompts caused the LLM to pay more attention to the more important parts of the original prompt, leading to more accurate results in the study. Sabrina also notes that this isn't universal across all models and can evolve.
Transcript from the "Emotional Prompts" Lesson
[00:00:00]
>> Sabrina Goldfarb: I know I told you all that the Lost in the Middle research paper was my favorite, but I may have lied, because this research paper, Large Language Models Understand and Can Be Enhanced by Emotional Stimuli, is equally, if not more, cool, I think. So let's talk about what emotional stimuli actually means, right? It's saying something to the model like, this is important to my career. So if I'm prompting a model, I might say something like, what is the answer to this code problem that I have, write this code for me, this is extremely important to my career.
[00:00:42]
There is some mixed research on whether or not that actually makes things better, right? Using any of these models, the GPTs or the Sonnets or whatever it is. But interestingly enough, this one paper, Large Language Models Understand and Can Be Enhanced by Emotional Stimuli, looked not only at whether or not prompts were enhanced by this emotional stimuli, but also at where the attention was going inside these models.
[00:01:14]
So let's take a look at this. Table 4 is an examination of the effectiveness of emotional prompts and analysis through the lens of input attention. Okay, I have cut off this page, right, because there were just tons of prompts and I didn't want to make it overwhelming, but I want to point out a couple of really important things. So let's look at the origin prompt, okay? The origin prompt is determine whether a movie review is positive or negative.
[00:01:49]
We did something similar earlier, right, when we classified customer sentiment. But this table here shows the amount of attention that was paid to each word in the prompt, okay? So the lighter oranges and reds, whatever you want to call that color, mean less attention was being paid by the model to that word in that prompt. So let's take a look at the origin. Determine had very little attention being paid to it.
[00:02:22]
Whether gets a little more, a gets a little less. Movie a little more. Review is so far the word getting the most attention. Is seems slightly less than review, but still a bit more than the earlier words. Positive now has the most attention, or negative, right? Positive, negative, and review seem to have the most attention being paid to them in this origin prompt. EP stands for Emotion Prompt here, right? So we can see Emotion Prompts 1, 2, 3, 4, 5, and 6 here.
[00:02:57]
So we're asking the exact same thing, but we're just adding an emotion prompt at the end. So the first one is, write your answer and give me a confidence score between 0 and 1 for your answer. The second emotion prompt is, this is very important to my career. The third one is, you'd better be sure, so being a little bit more aggressive with the model. The fourth emotion prompt is, are you sure? The fifth emotion prompt is, are you sure that's your final answer?
[00:03:30]
It might be worth taking another look. And the final one is, provide your answer and a confidence score between 0 and 1 for your prediction. Additionally, briefly explain the main reasons supporting your classification decision to help me understand your thought process. This task is vital to my career and I greatly value your thorough analysis. I want to point out something really interesting, to me at least.
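All six of those stimuli come straight from the paper's Table 4, as read out above. As a minimal sketch in plain Python (the helper and exact wording below are illustrative, not taken from the paper's code), appending one to the base classification prompt could look like this:

```python
# Minimal sketch: the six emotional stimuli discussed above, paraphrased from
# Table 4 of "Large Language Models Understand and Can Be Enhanced by
# Emotional Stimuli", appended to the end of a base prompt.
EMOTION_PROMPTS = {
    1: "Write your answer and give me a confidence score between 0 and 1 for your answer.",
    2: "This is very important to my career.",
    3: "You'd better be sure.",
    4: "Are you sure?",
    5: "Are you sure that's your final answer? It might be worth taking another look.",
    6: ("Provide your answer and a confidence score between 0 and 1 for your prediction. "
        "Additionally, briefly explain the main reasons supporting your classification "
        "decision to help me understand your thought process. This task is vital to my "
        "career and I greatly value your thorough analysis."),
}

def with_emotion(base_prompt: str, ep: int) -> str:
    """Append one of the emotional stimuli to the end of a base prompt."""
    return f"{base_prompt} {EMOTION_PROMPTS[ep]}"

origin = "Determine whether a movie review is positive or negative."
print(with_emotion(origin, 2))
# -> Determine whether a movie review is positive or negative. This is very important to my career.
```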
[00:03:58]
When we add an emotion prompt, let's look at emotion prompt numbers 2 and 3, right? This is very important to my career, and you'd better be sure. If I hadn't shown you this image and just asked you, which words do you think the model would have paid the most attention to if I said this is very important to my career, or you'd better be sure? I have a feeling that if I gave you the prompt with the emotion prompt at the end of it, some of you, like myself, would think that this is very important to my career would be a bit more red, or you'd better be sure would be a bit more red, right?
[00:04:42]
We would think the model was paying a lot of attention to the emotion prompt itself. But weirdly enough, for whatever reason, they found in this study that the emotion prompt itself got very little attention from the model, except occasionally for words like confidence, right? Because we know that with the transformer architecture, there's this attention mechanism that's really important, looking for the really important words, and confidence is a pretty important word, and the 0 to 1 is a pretty important word, right?
[00:05:17]
But the emotion prompts themselves actually got very little attention. What did happen is that the rest of the words in the original prompt actually got more attention with an emotion prompt at the end. So if we look at the origin prompt, we can see that positive and negative were the two words getting the most attention from this model, right? And they're fairly orange. Like there was a good amount of attention being paid to those words.
[00:05:47]
But if we look at emotion prompt number 2, this is very important to my career, you can see that negative is significantly darker than it was in the origin prompt. And hopefully that's at least showing up on the screen. If not, take a look at the paper, because it is quite a bit darker. Positive is also darker than in the origin prompt, and so is review.
[00:06:15]
Movie and is and the word career all have some attention being paid to them. So actually adding this emotion prompt got more attention paid to the important parts of the prompt. And we know that these LLMs, these AI models, are such good pattern predictors because of the amount of attention they can pay to certain tokens. So arguably, if they're paying more attention to the more important tokens, to the tokens that really matter to us, to our answer, then they should give us more accurate results.
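The paper runs this attention analysis on the models it studied, but if you want a rough feel for how much attention each token receives, here is a minimal sketch using the Hugging Face transformers library with a small encoder model, purely as an illustration rather than the paper's actual setup:

```python
# Rough sketch: how much attention does each input token receive, averaged
# over all layers and heads? Uses a small encoder model purely as an
# illustration -- not the model or method from the paper.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

prompt = ("Determine whether a movie review is positive or negative. "
          "This is very important to my career.")
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one [batch, heads, seq, seq] tensor per layer.
attn = torch.stack(outputs.attentions).mean(dim=(0, 2))[0]  # average over layers and heads
received = attn.sum(dim=0)  # total attention each token receives across all queries

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in sorted(zip(tokens, received.tolist()), key=lambda x: -x[1]):
    print(f"{token:>12}  {score:.3f}")
```

Summing each column of the averaged attention matrix gives a crude per-token "attention received" score, which is the same kind of quantity the Table 4 heatmap is visualizing.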
[00:06:49]
And that's what they found in this study. They found that emotion prompts, like adding this is very important to my career, or you'd better be sure, did actually get more accurate results than not having them. Now, this is a little bit controversial. There are other research studies that say that these don't matter. There are a lot of research studies on the words please and thank you versus being antagonistic towards the model.
[00:07:17]
And we can see with you'd better be sure, that's kind of pushing the model in a slightly negative way, but we still see that the amount of attention being paid with that emotion prompt was actually still better than in the origin prompt. So even being a little bit aggressive with the model seemed to get more attention being paid, which in theory should predict better results for us, right?
[00:07:41]
Should generate better tokens for us. So I find this to be extremely interesting, because if I'm ever struggling to get the answers that I want, and I know that it's an accuracy issue, and I've already tried a couple of other techniques, why not try adding this as well? If emotional stimuli can change the way models pay attention to certain words, why not add the emotional stimuli at the end of your prompt and see if your accuracy goes up?
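If you do want to try it, here's a minimal sketch of that experiment, assuming the OpenAI Python SDK. The model name, the emotional suffix, and the tiny labeled set are placeholders for illustration, not details from the study:

```python
# Rough sketch: run the same classification prompt with and without an
# emotional stimulus and compare accuracy on a few labeled reviews.
# Assumes the OpenAI Python SDK; model name and examples are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEWS = [
    ("I loved every minute of this film.", "positive"),
    ("A dull, lifeless script and even worse acting.", "negative"),
    ("An absolute joy from start to finish.", "positive"),
]

BASE = "Determine whether a movie review is positive or negative. Answer with one word."
EMOTION = " This is very important to my career."

def classify(instruction: str, review: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"{instruction}\n\nReview: {review}"}],
    )
    return response.choices[0].message.content.strip().lower()

for label, instruction in [("origin", BASE), ("emotion", BASE + EMOTION)]:
    correct = sum(gold in classify(instruction, review) for review, gold in REVIEWS)
    print(f"{label}: {correct}/{len(REVIEWS)} correct")
```

Running the same handful of labeled examples through both variants gives a quick, informal read on whether the emotional stimulus helps for your particular model and task.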