Transcript from the "Code Coverage" Lesson
>> Kent C. Dodds: So the next thing that I wanna jump into is something I skipped over earlier, and I think that it's probably still relevant to everybody. So I wanna take a quick break from exercises in coding and talk about code coverage. This project is keeping track of code coverage, and we're not gonna talk about how to configure tools to make that happen.
[00:00:52] So you can actually pull that up in the browser if you run open coverage/lcov-report/index.html, then you get this beautiful coverage report. How many people have seen this before? Okay, a handful of you. So here at the bottom it says generated by Istanbul, you can click on that to learn more about what Istanbul is all about.
[00:01:14] It's pretty cool. So let's take a look at some of the server side stuff. If we look at our controllers, we're in server source controllers. These are actual files that have various levels of coverage. If it's all red that means it's not covered at all. So here let's take a look at one that has a little bit of coverage.
[00:01:36] So the highlighting here is indicating what lines of code have been run during the test run. So what this means is, and here is a spoiler alert, don't look, but what this means is that during the test run, I actually never ran this authorize function. And so what that can indicate to me is that this authorize function might work, or it might not.
[00:02:00] It might actually even have a very obvious error where it's trying to access a property that doesn't exist or anything. There's no way for our test to tell us that because this code is never run. And so that's what the code coverage report is trying to communicate to us.
[00:02:15] And then for statements here, we have this E next to the if, which means the else case is never covered for this statement, and we see that else statement never runs. Here, we see the if case is never run in this situation. So we are testing the update user function, but we're not testing what happens if that user isn't authorized to update.
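To make that concrete, here's a hand-written sketch of the kind of controller code being described. The names (authorize, updateUser) and shapes are hypothetical, not the actual project file, but they show how an untested authorization branch shows up in a coverage report: if the tests only ever call updateUser with an authorized user, Istanbul marks the early-return branch (and the authorize function itself, if never reached) as uncovered.

```javascript
// Hypothetical sketch, not the actual project code.
// If tests only exercise the "happy path," the coverage report
// flags the unauthorized branch below as never taken.
function authorize(requestingUser, targetUserId) {
  // Allowed if you're updating yourself, or you're an admin.
  return requestingUser.id === targetUserId || Boolean(requestingUser.isAdmin)
}

function updateUser(requestingUser, targetUserId, updates) {
  if (!authorize(requestingUser, targetUserId)) {
    // If no test sends an unauthorized user, Istanbul shows this
    // branch with an "I" / red highlight: the if case never ran.
    return {status: 403, error: 'Not authorized'}
  }
  return {status: 200, user: {id: targetUserId, ...updates}}
}
```

A test suite that only calls `updateUser({id: 1}, 1, {...})` gets green lines everywhere except that 403 branch, which is exactly the signal the report is giving.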
[00:02:37] And so this can give us a good idea of what areas of our code base are not tested. And what's more important than figuring out what areas of our code base are not tested is determining what use cases are not being tested. What use cases are you not supporting?
[00:02:54] Because maybe we have this edge case here, in get users, where there are no users. But that pretty much never ever happens. And if there really are no users, that's probably indicative of some other bigger problems. So we don't really care to take any time to cover that.
[00:03:13] I'm not suggesting that's a real world situation, but in general we should think critically about this code coverage. One mistake that I've seen teams make, and I've actually seen this more from management than from software developers, is creating a mandate that you have to have 100% code coverage.
[00:03:35] That's a very bad idea for application development. Because what winds up happening is you have this curve of the value that the code coverage provides to you as you go up the code coverage percentage chain. So after a certain point, which really kind of depends on your use cases and things, there's a huge amount of diminishing returns.
[00:04:04] And certainly I feel pretty safe to say that in almost all applications, at 100% code coverage you've long passed the point of diminishing returns. The tests you have to write to get that last 10% of code coverage are really finicky and hard to maintain. They have to do really weird hacks, you have to start changing your source code to expose certain hooks that are only useful for your tests.
[00:04:32] Certain things like that. So driving your tests exclusively by your code coverage is a bad idea. Another thing that the code coverage report doesn't tell us is whether getting coverage here will give me the same value as if I were to put my coverage work over there. Whether I do it in one place or the other, I'm gonna get the same boost to my code coverage number.
[00:05:00] They don't give me the same boost in my confidence or in what actually matters, so maybe getting coverage here isn't really a huge deal, but I'm still gonna get that boost in code coverage. So the code coverage report is just telling you, and this is what I'm really trying to get across.
[00:05:16] The code coverage report is only telling you what code has been run during your test. It's not trying to make any suggestion on where you need to start writing tests, or what use cases you're missing necessarily. It's just telling you what use cases, or what code, you're not actually running during your test.
[00:05:39] So does anybody have questions about code coverage, yes?
>> Speaker 2: So during lunch we were talking about basically granularity of the tests. And finding the right balance between testing very fundamental assumptions in your code versus testing the thing that has all kinds of dependencies. And in the process of testing at a higher level component, you're essentially testing whether all those subcomponents or those dependencies work.
>> Kent C. Dodds: Yeah.
>> Speaker 2: [INAUDIBLE]
>> Kent C. Dodds: That's applicable here, yeah. So that's a good question, so at what point do you test? I could test this authorize function by itself, I could expose it and then test it by itself. But it's actually maybe being used in the route for registration.
[00:06:33] And so should I test the authorize function in isolation, or should I just test the registration or the login, and then I'll get that coverage. So you're gonna get coverage in both places. So I would suggest the basic principle for testing, as I said earlier: the more your tests resemble the way your software is used, the more confidence they can give you.
[00:06:58] That principle shouldn't be treated as dogma either. There are trade-offs with if you consider, okay, what if I'm in a world where people are plentiful and time is plentiful as well? The best way to test your software would be to have people go through manually every flow in your software and make sure that everything is still working.
[00:07:26] The problem is, that takes too long. Humans are actually error prone and so you do wanna automate that. But yeah, so maybe automating that process of clicking through your whole app, that's what we call an end-to-end test. And those would be great but there are trade-offs with that as well.
[00:07:41] They take a long time, they're kinda flaky, finicky, and sometimes they can be pretty hard to maintain. They're resource intensive. So there are trade-offs at every level, and so there's the actual decision of where you focus your time and your tests, and whether we should test this in isolation, or whether we should test it as part of an integration test or an end-to-end test.
[00:08:02] You need to develop an intuition about that, but just keep in mind the idea, the closer your tests resemble the way your software is used, the more confidence they'll give you. And so if you can, reasonably cover this code in a way that's closer to how the software is used then that's generally better.
[00:08:22] I'll talk a little bit more toward the end about this subject, I have some more specific guidance, but hopefully that kind of helps answer your question there. Yeah.
>> Speaker 3: So jumping off that, coverage tells you which functions have been touched during tests. Is there any tooling to measure the many different ways to invoke a function, you know, with different parameters?
[00:08:52] Is there any way to, like kinda gauge how well each function is exercised?
>> Kent C. Dodds: Yeah, yeah, so what you're talking about is, I don't know if this is an official word for it, but I call it data coverage. I can call this authorized function or maybe a better example would be our sum function.
[00:09:13] So I can call the sum function with a number, but what happens if I pass it nothing? What happens if I pass it a string? What happens if I pass it more arguments or whatever? And so as far as I'm aware there's no tool that can give you that kind of coverage.
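A minimal sketch of the point being made, using a hypothetical sum function: one test call gets you 100% line coverage, but it says nothing about how the function behaves with inputs you never tried. The specific results in the comments follow from ordinary JavaScript coercion rules.

```javascript
// A trivial sum function. One call covers every line, so the
// coverage report says 100% — but "data coverage" (how many kinds
// of inputs were exercised) is a separate question it can't answer.
function sum(a, b) {
  return a + b
}

sum(1, 2)      // covers every line — coverage hits 100% right here
// ...yet none of these behaviors were verified by that one call:
sum(1)         // 1 + undefined → NaN
sum('1', 2)    // string concatenation → '12'
sum(1, 2, 3)   // extra argument silently ignored → 3
```

All four calls look identical to the coverage report; only the first one was actually tested.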
[00:09:29] However, static type checking tools do have mechanisms for telling you how well your types are covering your code. And so those can help you with that. And it actually, by using a static type checker you kind of remove that category of concern from your application so you don't really need to worry about it.
[00:09:56] Good question. But that said you also have a situation where yes this accepts this string but what happens if it's a really really long string? Or something like that. You can't really get that from a type checker either. So at some point, having some sort of data coverage would be kind of cool, but I'm not aware.
[00:10:14] You could build it, that'd be awesome, yeah. That's a fun thing about software is we can solve our own problems. Is anyone curious to see how this code coverage report is generated? I can show you in just a really quick demo what things look like. So Istanbul, the code coverage tool, I'm pretty sure it used to use some really weird regex stuff to put things in place.
[00:10:40] Now it's actually just a Babel plugin. It's very cool and there's actually a front end masters course where I show you how to make custom Babel plugins. I did that last year, it's pretty fun stuff. But this is a utils file from one of my open source projects.
[00:10:56] I copied this a while back so it might be a little different. Just some regular functions, we export all of these things. So here's how Istanbul keeps track of what lines are run, what functions are run, ternaries, all that stuff. It converts your beautiful, let's see, how long is this file?
[00:11:18] 309-line file to a 2945-line file. Most of that is taken up by this enormous object [COUGH] up here at the top. That has an entry for every single line, function, branch, everything. So here we have all the statements in the code. Here's the first statement. It starts on line 3, column 16, and ends on line 3, column 17.
[00:11:45] It uses that so it knows where to highlight things that aren't covered. Yeah?
>> Speaker 2: To clarify, that thing we're staring at, that object, is that generated?
>> Kent C. Dodds: Yeah, it's generated, and it happens in memory, you don't type this out. So you never actually see these files.
[00:12:08] I just made it for you because I care. So then we have the same thing for functions. We have a map of every function, what it's called, where the declaration starts and ends, where the whole function exists. And then we have branches. Those are if statements, ternaries, switch statements.
[00:12:27] This is where all those start. This one's interesting because you have multiple locations for these different branches. Often you'll have like the consequence and the alternate for like an if statement. And then we have a record of how many times that statement or that function or that branch is run.
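Pieced together, the object being described might look something like the sketch below. This is a hand-written approximation for illustration: the real object babel-plugin-istanbul generates uses hashed variable names and has an entry for every node in the file, but the overall shape (statementMap, fnMap, branchMap, plus the s/f/b hit counters keyed by the same ids) is what's on screen.

```javascript
// Rough, simplified sketch of the per-file coverage object Istanbul
// injects at the top of an instrumented file — not generated output.
const coverageData = {
  path: '/src/utils.js',
  statementMap: {
    // statement 0 spans line 3, columns 16–17, as in the demo
    0: {start: {line: 3, column: 16}, end: {line: 3, column: 17}},
  },
  fnMap: {
    0: {
      name: 'sum', // hypothetical function name
      decl: {start: {line: 5, column: 9}, end: {line: 5, column: 12}},
      loc: {start: {line: 5, column: 20}, end: {line: 7, column: 1}},
    },
  },
  branchMap: {
    0: {
      type: 'if',
      // multiple locations per branch: consequent, then alternate
      locations: [
        {start: {line: 10, column: 2}, end: {line: 11, column: 3}},
        {start: {line: 12, column: 2}, end: {line: 13, column: 3}},
      ],
    },
  },
  // hit counts, keyed by the ids in the maps above
  s: {0: 0},       // statements
  f: {0: 0},       // functions
  b: {0: [0, 0]},  // branches: one count per location
}
```

The report generator reads the counters and uses the start/end locations to know exactly which source spans to highlight as uncovered.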
[00:13:14] Now it's idCounter = (increment this statement's counter, and then 1). And the comma operator just says, okay, whatever comes after this. It's like, ignore the first thing, this is what the evaluation of this expression should evaluate to. So that's kind of fine, and then we've got the same thing for ternaries, and for functions, and statements here.
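Written out by hand, the comma-operator trick being described looks like this (simplified names; the real plugin wraps the counter object in a function with a hashed name):

```javascript
// Simplified, hand-written version of Istanbul-style instrumentation.
const cov = {s: {0: 0, 1: 0}}  // statement hit counters

// original code:        var idCounter = 1
var idCounter = (cov.s[0]++, 1)
// The comma operator evaluates cov.s[0]++ for its side effect,
// then the whole expression evaluates to 1 — same value as before.

// original code:        idCounter = idCounter + 1
idCounter = (cov.s[1]++, idCounter + 1)

console.log(idCounter)  // 2 — behavior is unchanged
console.log(cov.s)      // { '0': 1, '1': 1 } — each statement ran once
```

The key property is that the instrumented expression evaluates to exactly the same value as the original one, so the program behaves identically while the counters record what ran.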
[00:13:35] And it just kinda messes up our code in terrible ways, but in reliable ways so that our tests will operate the same, whether we're recording coverage or not. So anyway, the reason that I wanna show you this is just so you have an understanding of what's going on under the hood for generating code coverage.
[00:13:53] In addition to what value code coverage brings to your application. It does bring value, but it shouldn't be taken as an indicator of something that it's not. It shouldn't be taken as an indicator of confidence; it should really be taken as an indicator of what lines have been run, what functions have been run, what happened.