The "Testing Q&A" Lesson is part of the full, Production-Grade Vue.js course featured in this preview video. Here's what you'd learn in this lesson:

Ben answers questions about writing tests for known bugs and testing with manual testers.


Transcript from the "Testing Q&A" Lesson

[00:00:00]
>> So the first question we have here is: should we write a test for every bug that shows up? Generally speaking, I think the easy answer would be to say yes, of course. If it's a bug, then go ahead and write a test for it.

[00:00:13] But when we're talking about solutions that scale, and not simply patching things with a band-aid, the important thing is understanding, at its core, what the bug actually entails, and figuring out what you're really trying to test. Because if you're simply patching a UI bug, for example, and it's caused by a deeper-rooted issue, then you're basically testing the wrong thing and leaving yourself exposed.

[00:00:40] And so that's why I would caution against immediately writing a test for the thing that broke, without at least spending some time on that investigation. Especially with end-to-end tests: is this worth doing? Is this worth adding to our build time?
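When the investigation does confirm a genuine regression worth covering, the result is usually a small, focused test. Here's a minimal sketch, assuming Jest and Vue Test Utils; the CurrencyInput component and the bug it guards against are hypothetical:

```js
import { mount } from '@vue/test-utils'
import CurrencyInput from '@/components/CurrencyInput.vue'

// Hypothetical regression: the component emitted NaN when the user
// typed a value with a trailing decimal point.
test('emits a number for input with a trailing decimal point', async () => {
  const wrapper = mount(CurrencyInput)
  await wrapper.find('input').setValue('42.')
  const events = wrapper.emitted('update:modelValue')
  // The fix should guarantee a numeric payload, never NaN.
  expect(events.at(-1)[0]).toBe(42)
})
```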

[00:00:58] Because what you can do, when you have error monitoring tools like Sentry, for example, is let them track things, so that if certain errors really do recur that often, you can collect those error reports and write tests for the things that actually happen frequently. But at the same time, if something is that buggy, then we probably need to figure out what caused it.
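As a rough sketch of what that monitoring looks like in a Vue 3 app, assuming the @sentry/vue package (the DSN and release values here are placeholders):

```js
import { createApp } from 'vue'
import * as Sentry from '@sentry/vue'
import App from './App.vue'

const app = createApp(App)

Sentry.init({
  app,
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
  // Tagging releases lets you see whether an error actually recurs
  // across deploys before investing in an automated test for it.
  release: 'my-app@1.0.0',
})

app.mount('#app')
```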

[00:01:20] And so what I'm trying to say here is that when it comes to writing tests, you really have to weigh the pros and cons of what they'll do to the build pipeline. Because I don't know about you all, but I've been in plenty of apps where I submitted my pull request and it took like 30 minutes for the tests to run, just for me to know whether my PR was even valid.

[00:01:43] And when you have tests that take that long to run, you start to build distrust between the developers and the tests. Because we all know, especially when it comes to integrating tests into our continuous integration pipeline, that if it takes a long time and then you find out that, by the way, we need this hotfix, developers will skip those tests immediately and just ship.

[00:02:06] Because the idea is that you're hoping that this will be the thing that fixes it. And at that point, you have bypassed the value of those tests. And so this is why, when it comes to choosing which tests are right, speed is of utmost importance: making sure they still run quickly, and that the time spent making those tests work is still valuable to the team.
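One common way to keep that feedback loop fast, sketched here assuming Jest (the paths are hypothetical), is to keep slow end-to-end specs out of the default pre-merge run and execute them in a separate, less frequent pipeline:

```js
// jest.config.js
module.exports = {
  // Skip slow end-to-end specs in the default (pre-merge) run; they
  // would run in a separate, scheduled CI job instead.
  testPathIgnorePatterns: ['/node_modules/', '/tests/e2e/'],
}
```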

[00:02:28] Because we have to remember, too, that while there may be a single bug we're trying to fix, the impact of what we do scales across the entire team. Especially when we're talking about production-grade, enterprise-level applications, this means you're clogging up the pipeline for however many developers are going to be using it.

[00:02:45] So if we realize the impact of what we're doing, then we can be more intentional about where our efforts are best put forth. So the next question here is about a setup a lot of departments have: we have automated tests that developers write, and then we have manual testers.

[00:03:04] And so if we assume we've applied that 80/20 Pareto principle, where we're covering the key 80% we want to focus on, where might manual testers fall into that? In my experience, manual testers are really good at thinking of things that push the edge cases.

[00:03:23] And so those are sometimes things like, if you have an entry field for the dollar amount you're depositing into a payment app, they'll actually try things like adding special characters. They'll try to break it by entering negative numbers and seeing what they can do.
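Those manually discovered edge cases are also good candidates to fold back into the automated suite. A sketch assuming Jest, with a hypothetical validateDepositAmount helper:

```js
import { validateDepositAmount } from '@/utils/validation'

describe('deposit amount edge cases', () => {
  // Each row mirrors an input a manual tester might actually try.
  test.each([
    ['special characters', '$1,000!'],
    ['a negative number', '-50'],
    ['scientific notation', '1e100'],
    ['empty input', ''],
  ])('rejects %s', (_label, input) => {
    expect(validateDepositAmount(input).valid).toBe(false)
  })
})
```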

[00:03:42] And so in some ways, I would say the value of manual testers is almost like chaos engineering, if you've heard of that. Their goal is to figure out ways to break it, to outsmart the developers with scenarios that, to be honest, most people will probably try, and to see whether it still works.

[00:03:58] So for example, say all you're relying on, on the front end, is a simple disabled attribute to disable a button, but you don't do any back-end validation when that form is submitted. This is where the manual testers' technical competency comes into play.

[00:04:14] They'll go in, and I'm sure some of you have done this as web developers, open the source in devtools, remove that attribute, and still try to submit the form. And sometimes that's helpful, because then we know the form itself is working, and it's just that someone didn't enable the button correctly.
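The flip side is making sure the server holds up when the button is bypassed. Here's a sketch of that back-end validation, assuming Express (the route and field names are hypothetical):

```js
import express from 'express'

const app = express()
app.use(express.json())

app.post('/api/deposits', (req, res) => {
  const amount = Number(req.body.amount)
  // Re-validate on the server: a tester (or any user) can re-enable a
  // disabled submit button in devtools and post whatever they like.
  if (!Number.isFinite(amount) || amount <= 0) {
    return res.status(400).json({ error: 'Invalid deposit amount' })
  }
  res.status(201).json({ amount })
})

app.listen(3000)
```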

[00:04:27] But these are the sorts of flaws that manual testers tend to think of when it comes to pushing the boundaries of what our tests might not cover, and that, in these cases, we might not think of as developers.