Transcript from the "Project Metrics" Lesson
>> So we now have team metrics, which ladder up into organizational metrics, and you should have project metrics that ladder up into your team metrics, that then ladder up into your organizational metrics. Any project that you undertake should affect your metrics, and I don't feel super controversial in saying that.
[00:00:23] I mean, here's a good example. One of my projects was like, I just have to recruit cuz I have positions open and I gotta fill them. You can argue that eventually that's gonna affect signups and time to retention in all of these, because if I have more people working, at least it means all my projects can go faster, right?
[00:00:42] So that would be one of them, but anything that I work on should affect my metrics or I should change what my metrics are measuring. And I don't really feel like I need to caveat that at all, I think that's pretty unexceptional. If you're doing things at work that don't affect your metrics, then you're not accomplishing your goals.
[00:01:04] So we'll get into this once we get into writing product specs, but all of your projects are like, here are my personal project metrics. So I don't know, what's a good one here? Let's say I'm writing a quiz that asks you, what are your favorite movies? And so you say, I don't know, I like Fight Club and I like Gladiator and I like The Matrix, I'm a basic white guy [LAUGH].
[00:01:33] I like all those movies and I am a basic white guy, so that stands to reason. I could make one of my metrics on there, how many questions answered, how many people have signed up based on this quiz, how many people gave me their email? All of those various different things that are very specific project metrics.
[00:01:58] And that's fine, we should track all those. I should have a little mini dashboard of all these various different things that I'm tracking. But keep in mind that ultimately, all those project metrics are eventually going to ladder up into how many people signed up for the process, right?
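The laddering described here can be sketched as a simple lookup. The metric names below (quiz metrics, signups, monthly active users) are hypothetical, just to illustrate how each project metric feeds a team metric, which in turn feeds an organizational metric:

```python
# Hypothetical metric names for illustration: each project metric feeds a
# team metric, and each team metric feeds an organizational metric.
METRIC_LADDER = {
    # project metric          -> team metric it feeds
    "quiz_questions_answered": "signups",
    "quiz_emails_captured":    "signups",
    "quiz_completion_rate":    "bounce_rate",
    # team metric             -> organizational metric it feeds
    "signups":                 "monthly_active_users",
    "bounce_rate":             "monthly_active_users",
}

def ladder(metric):
    """Walk a metric up the hierarchy to the organizational metric it serves."""
    chain = [metric]
    while metric in METRIC_LADDER:
        metric = METRIC_LADDER[metric]
        chain.append(metric)
    return chain

print(ladder("quiz_emails_captured"))
# → ['quiz_emails_captured', 'signups', 'monthly_active_users']
```

The useful property of writing it down like this is that a project metric with no entry in the ladder is immediately visible: it isn't measuring anything your team or organization cares about.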
[00:02:13] It's also probably going to adversely affect my bounce rate, right, which we set up here was probably a team metric, because the more clicks you have on anything, 100% of the time, you are going to get fewer people that are gonna finish it, right? So it's okay to trade those off, but be explicit like, we expect bounce rate to go up, but we expect signups to go up, or we expect retention to go up, or something like that, right?
[00:02:39] But the key here is, my project metrics are designed specifically to measure things that then ladder up into my team metrics, which then in turn ladder up into organizational metrics. Does this hierarchy make sense? Cool, all right, another bone to pick, this is my favorite part of coming on Frontend Masters.
[00:03:02] I just get to rant about stuff that bothers me, and you all have to listen to me, [LAUGH]. Let's talk about CSAT. It can be a lazy metric sometimes. Notice up here, I didn't even talk about customer satisfaction at all, but that's what CSAT is, right?
How satisfied are people on my project? It is something that you should track, it is something that you should be interested in, you should have surveys and things like, hey, how was that experience? What do you think about it? Do you have feedback for us? But it's a relative metric, as opposed to something like signups, which is empirical, right?
[00:03:45] We know exactly how many people signed up for our service and we know people on iOS are more likely to sign up than people on TVs, for example, right? Those are pretty apples to apples kind of comparisons, a signup is a signup is a signup, and a dollar is a dollar is a dollar, right?
[00:04:03] So, it doesn't matter, it's like, if I can push you to iOS because I know you're gonna sign up on that, I'm gonna do that, right? CSAT, it's a relative metric in that it's relative to itself, right? So I'm gonna give you some examples here, is like, if I asked you what was your CSAT on the signup and the billing process?
[00:04:24] People don't like giving you money, and so they're gonna give it a fairly low score, because the action that you're asking them to participate in is just not a fun action, it's necessary, right? But it's like, how was your experience in giving me money? They're gonna be like, two, I don't know, it wasn't bad, but I didn't wanna give you money, right?
As opposed to, what's your experience watching this brand new episode of this really cool show? It's like, yeah, ten out of ten, loved it, right? So if you look at those metrics and you think that you're comparing apples to apples like, well, watching awesome show, ten out of ten, billing team, you're a 2 out of 10.
[00:05:00] Billing team, you're not doing great, right? So filling out forms is never fun. The best you can hope for is not painful when you're kind of in these not fun sort of things, right? Whereas the team watching the show, if it's not, I love this product so much I just wanna explode from happiness, that team is objectively failing.
To your point, how do you evaluate what's good? First of all, it's just hard, that's why I kinda rant on CSAT a little bit, right? But it is relative, like if you have a 2 out of 10, and then you get a 2.1, it means like, okay, people were less hateful towards this, that's positive.
They are slightly less reluctant to give me money, I like that, right? So, one, use it as a relative metric, measure your baseline of where we are today and then try not to slide back. That's generally the best advice I can give you about CSAT. The other thing that I'll tell you that a different PM told me is pair it with another metric that's closely related.
[00:06:07] So in billings, it would be like successful updating of billing information or successful accrual of payment information. That is also really useful, it's like, all right, we are raising how many people are successful at this and we are not lowering CSAT. That's actually a really useful insight of like, hey, we got better at this without making people hate it more.
[00:06:31] Cuz if you got way better at it, but you're also making people really upset, you probably actually didn't really help your product that much, right? So pair CSAT with something, that's actually a super useful way of looking at CSAT. And actually to your point of Europe versus the US, this is my favorite example of that, the exact same phenomenon.
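The pairing advice above can be sketched as a small decision rule. This is a minimal sketch under stated assumptions: the function name, the metric names, and the tolerance value are all made up for illustration, and the baselines are CSAT treated as relative to itself plus a hard success-rate metric like successful billing updates:

```python
def evaluate_release(baseline_csat, current_csat,
                     baseline_success_rate, current_success_rate,
                     csat_tolerance=0.1):
    """Judge a change by a paired metric, treating CSAT as relative.

    The hard metric (e.g. successful billing updates) should go up;
    CSAT only needs to hold near its own baseline, not hit some
    absolute number.
    """
    improved = current_success_rate > baseline_success_rate
    csat_held = current_csat >= baseline_csat - csat_tolerance
    if improved and csat_held:
        return "ship: success rate up, CSAT held"
    if improved:
        return "investigate: success rate up, but CSAT slid back"
    return "hold: success rate did not improve"

# Billing baseline: CSAT 2.0/10, success rate 78%.
# After the change: CSAT 2.1/10, success rate 84%.
print(evaluate_release(2.0, 2.1, 0.78, 0.84))
# → ship: success rate up, CSAT held
```

The point of the rule is exactly the insight in the transcript: "we got better at this without making people hate it more" is a pass, and a better success rate that tanks CSAT is a flag, not a win.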
[00:06:52] I used to manage seven SDKs across various different languages. Java developers are just salty people, I don't know what's wrong, they just don't like things. I was like, hey, here's a really useful way to interact with our services, is like, yeah, but we don't like it, we don't like you, and we don't like SDKs, and maybe we don't like Java, I don't know, right?
[00:07:13] But it was always, always super low, despite the fact that it was a really well crafted SDK, we spent a lot of time getting that Java SDK really good. Whereas the Node.js one, that was definitely the one we focused on the least, and they're like, this is awesome, we love SDKs, ten out of ten, but it's seriously broken.
[00:07:31] It's like, I know, but everything else in Node is broken too, so it's fine, right? So you can't even compare them across things like that, right? So this is why CSAT is a bit of a problem. It can be a top level metric for you, just make sure you're treating it appropriately, pairing it with something else, and looking at it as a relative metric, right?
[00:07:54] So, yeah, track it, but recognize the weaknesses of it. And I am not telling you not to track it, you should definitely track it. It's useful to see, particularly as a relative measure: what is our CSAT over time? Usually that's how you're gonna look at CSAT, on a graph of what has happened over time.
[00:08:14] Because being nine out of ten stars, that can either be awesome or that could be awful. Actually I think a perfect example of this is Uber and Lyft. If you're not over a 4.3 on Uber and Lyft, they kick you off the platform, because they expect every person to get five stars every single time.
[00:08:31] That's why your Lyft driver asks, please rate me, is because if they don't get those five stars, they get kicked off and you're literally affecting their livelihood. So also be very careful when you give those people low ratings cuz it actually can directly affect their ability to feed themselves, right?
[00:08:47] So the worst I'll do is I generally don't rate people low unless they were actually dangerous, right? So in that case, 4.3 out of 5, most of us would be like, yeah, that seems like a pretty good number. If you are 4.4 on Lyft, you are in danger of being fired, right?
>> I feel like there's inflation in those.
>> There is, there's an expectation that we rate everything five out of five, right? Yeah.
>> Well, this is kind of a follow-up question just to formalize it. We were talking about it's just a built-in method to inflate a metric.
[00:09:21] Coming from education and health care, you see this done all the time, where you just pick a number, and one year, that's the right number, the next year, it's too low, the next year, it's too high. But it's like we've just blinded ourselves to the actual conversation. But what it is is you're kinda seeing just a self-adjustment to some arbitrary pin in the map.
[00:09:41] How do you approach that, because software, you have to introduce and manage the lifecycle of a lot more of these over time?
>> Yeah, [COUGH] it's, one, trying not to move the goalposts, or being explicit when the goalposts have been moved, and then having better metrics, right?
[00:10:04] A signup is a signup is a signup, that's why a signup is a really good metric to track. That's why none of my projects are "raise CSAT", none of them, right? It's, do this metric and hope we don't affect CSAT in adverse ways, almost always, right? Sometimes it's like, hey, people don't like our docs, we probably should figure out how to raise the CSAT on our docs.
[00:10:24] But even then, that's page views or time to complete intent or something like that. Usually that's the metric first, and then we track CSAT alongside it. Actually, another good point about CSAT, it takes a really long time for you to do something and then for CSAT to change, right?
[00:10:43] That metric moves pretty slowly because, let's say, we're talking about docs here, right? So if I have bad docs, right, and I have one out of ten on my doc score, and then I do something and I change it, and all of a sudden, really good, all those people are gonna still be coming back and expecting bad docs.
[00:11:03] And if you ask them again after the change, they're gonna be like, well, it was bad before, I assume it's still bad now, one out of ten, right, or two out of ten, because this one was okay, but it used to be really bad too. It's gonna take some time to get a bunch of new people in there that are not jaded and to also convert some of the jaded people.
[00:11:21] And there's also just still gonna be a lot of jaded people because it used to be bad, right? So that over time is gonna move really slowly. It's a lagging indicator of how you're actually doing on those sorts of things. Whereas with signups, you can look at it like, on day one, I did this treatment and I got this amount of signups, and then on day two, I did this treatment and I got this amount of signups, right?
[00:11:45] You can point out the difference between those two, they're different, right? So just be really careful with CSAT, that's my entire spiel there for you.
>> What about NPS?
>> Yeah, Net Promoter Scores. It's kinda the same thing, it's basically the same. We tracked NPS really carefully at Microsoft, it was one of the big things they did track.
[00:12:15] That one aims to be a bit more apples-to-apples of like, hey, our NPS is this, Visual Studio's is this. But VS Code's is always higher because it's free, and people are like, I don't wanna pay for Visual Studio, everything else is free, why do I have to pay for that?
[00:12:32] 70 out of 100, right? I don't remember what the numbers were. But I kinda had the same feeling on it, it's relative to the product itself. It is slightly more useful in comparing some things to each other, but I still view it as, pair it with something that's gonna describe what's actually changing in the product.
[00:12:51] And try less to affect the NPS directly and try and make the product better. That's hard.
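For reference, the standard NPS calculation that the discussion above assumes: respondents score you 0 through 10, scores of 9 or 10 are promoters, 7 or 8 are passives, and 0 through 6 are detractors; NPS is the percentage of promoters minus the percentage of detractors, so it ranges from -100 to +100. A minimal sketch:

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.

    Standard buckets: 9-10 promoters, 7-8 passives, 0-6 detractors.
    NPS = % promoters - % detractors, ranging from -100 to +100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 1 passive, 1 detractor out of 6 responses
print(nps([10, 9, 9, 10, 7, 3]))  # → 50
```

Note that the passives drop out of the numerator entirely, which is part of why NPS, like CSAT, is most useful tracked against its own history rather than compared across products with different audiences.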
>> Yeah, the way I used NPS in the early days of Frontend Masters was asking everyone, of course, and then if they rated us a five or a six, we'd be like, what could we do to get a seven or an eight, or if they rated us whatever.
[00:13:15] It was just like trying to get two points up and asking them that follow-up question, and they gave amazing feedback. I had probably 500 or 700 conversations-
>> About that.
>> Just by asking them one simple follow-up, what can we do to get a nine or a ten if they rated us an eight?
>> Yeah, I mean, to your point there, when gathering CSAT scores, it's usually good to let them give freeform feedback or some sort of feedback. Like, you gave us a six out of seven or whatever, so why did you give us that score? You will frequently get a lot of really good feedback.
[00:13:55] You will also get a lot of garbage feedback. A really good example of this is the docs on Stripe in particular is like, all right, here's the docs, what did you like about this docs page? And people will be like, I hated this docs page. And why did you hate this docs page?
[00:14:10] And it's like, Stripe withheld my payments, right? Or I got flagged for fraudulent payments, which is like, that has nothing to do with our docs, right? So some of those can end up being bad, but the freeform feedback will frequently end up with, hey, I got stuck on this portion of the doc, and it was really hard and I did this and it worked.
[00:14:31] I wish you would have just called this out sooner. It's like, that's golden, right? And you can immediately act on some of those like that, but yeah, customer feedback, it's extremely important and also difficult.