
The "Wrapping Up" Lesson is part of the full, Testing Web Apps with Cypress course featured in this preview video. Here's what you'd learn in this lesson:

Steve wraps up the course by answering audience questions about managing keys and secrets, running a subset of tests within a test suite, reviewing test-run videos and results as a team, and acceptable testing durations.


Transcript from the "Wrapping Up" Lesson

[00:00:00]
>> Questions, comments, concerns, cause of outrage, deep meditations?
>> Maybe worth asking the question here for folks. Where have you found success managing secrets for CI/CD processes, for things where you have to store usernames, passwords, or credit cards to do checkout flows, stuff like that?
>> Yeah, I personally was fortunate enough not to have to work on the checkout process part.

[00:00:26]
For us, all of it depends on the company size, right? I think at SendGrid we used Amazon's key management service, AWS KMS or something like that. At Twilio it was, I think, something bespoke. Right now at Temporal, for things like API keys, it is usually just encrypted environment variables and stuff along those lines.

[00:00:52]
Those are usually features managed by the build processes themselves, right? Cypress is somewhat agnostic to them and just pulls them in from the environment variables, but yeah, I think the important part is making sure that those are encrypted. And again, at a certain scale, you have a team running all of that.

[00:01:10]
But if it's your open source project or something very small, Travis has them, I know that, GitHub Actions has them, Buildkite has them. You put in the environment variable, it's encrypted, you can never see it again, nobody can ever see it, you can only replace it.
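
(A minimal sketch of that pattern, assuming GitHub Actions and a hypothetical secret named ADMIN_PASSWORD. Cypress exposes any environment variable prefixed with CYPRESS_ through Cypress.env(), so the CI secret store is the only place the value ever lives:)

    // Hypothetical spec. CI injects the secret as an environment variable,
    // e.g. GitHub Actions: env: CYPRESS_ADMIN_PASSWORD: ${{ secrets.ADMIN_PASSWORD }}
    // Cypress strips the CYPRESS_ prefix, so the value is read with Cypress.env().
    describe('login', () => {
      it('authenticates without the secret ever living in the repo', () => {
        cy.visit('/login');
        cy.get('[data-test=email]').type('qa@example.com');
        // { log: false } keeps the secret out of the command log and recorded videos
        cy.get('[data-test=password]').type(Cypress.env('ADMIN_PASSWORD'), { log: false });
        cy.get('[data-test=submit]').click();
      });
    });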

[00:01:28]
And those are for the kind of simpler use cases, right? Obviously, when you get to enterprise scale, you're worrying about key rotation and stuff along those lines. And ideally you're automating some of that as well, so that individual teams are not doing it, because we know what happens when individual teams have to do that.

[00:01:47]
They don't. So you have those kinds of things in place. The next question was, is there a way to run a subset of the tests? I think that you can do a matcher. But running only the tests for the things that have changed, I don't really know the answer to that question, to be totally honest.
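
(The "matcher" is a glob: Cypress's CLI takes a --spec flag, so you can run one slice of the suite by hand; the spec paths below are hypothetical. What it doesn't give you out of the box is the run-only-what-changed behavior the question is about:)

    # Run only the checkout specs (hypothetical paths)
    npx cypress run --spec "cypress/e2e/checkout/**/*.cy.js"

    # Or several slices at once, comma-separated
    npx cypress run --spec "cypress/e2e/auth/*.cy.js,cypress/e2e/cart/*.cy.js"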

[00:02:05]
Because it's a tricky philosophical question, right? Every time we've thought about it, we should totally have that cuz our test suite is running too long, right, ultimately we always decided, no, we don't want to do that. Because at what level of confidence do you trust a tool that is trying to guess what files changed and then what routes changed?

[00:02:33]
I know that Jest has that for your unit tests, and I don't even trust it for that. I trust it for a pre-push hook, but I don't trust it for our build process, right? Because the cost-benefit isn't really great at any kind of scale. Do you know what I mean?
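
(For reference, the Jest feature being described: --onlyChanged asks git which files have changed and runs only the tests it can trace to them. A sketch of the pre-push usage, assuming husky, though any git-hook mechanism works:)

    # .husky/pre-push (hypothetical path)
    # Run only the unit tests Jest can relate to uncommitted changes.
    npx jest --onlyChanged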

[00:02:55]
The cost of it not being correct versus the benefit of the saved time. A pre-push hook, yeah, checks the stuff I changed. But I'm waiting for a code review anyway. I have definitely, again, back in the something, something Selenium days, seen situations where the integration test suite is so long-running that it's unbearable and it slows down the team.

[00:03:20]
But I think with a lot of the CI processing in a lot of cases, I traditionally find that the turnaround time for code review is longer than it takes to run the pipeline, right? I think when that changes, then maybe it's time to think about that kind of stuff.

[00:03:42]
But I always ask myself that question, and then I never go through with it, because of those reasons of, am I comfortable with it getting even slightly wrong? Yep?
>> On that note, especially as we're getting things like videos and other artifacts dumped back, do you have any experience or knowledge of team reviews?

[00:04:02]
It would be analogous to a code review, right? But we'd actually be reviewing the tests, with the test results as part of the code-improvement process.
>> Yeah, that's interesting. I think it's a really great idea, right? I think about how it's always happened for me, which is when the complaining gets high, [LAUGH] right?

[00:04:24]
The builds are always failing. And then you have that meeting, and then you have it, right? But I like the idea of it as a purposeful mechanism rather than a response to the complaints. Cuz, I don't know about you, but somebody always suggests, we should turn off the tests, [LAUGH] right?

[00:04:44]
And everyone has reactions. I've been there for that conversation. Someone goes, I think we need to turn off the tests. And that's usually when those conversations happen. By that point, it's gotten to where we're talking about the length of the tests, the flakiness, cuz it's usually not the tests themselves, right?

[00:05:03]
It's usually something else, like, okay, how are we simulating the environments, are we running too many end-to-end tests? Did my test fail cuz I'm running it in staging and another team broke their own API in staging, and now my tests are failing? Right, that's where you kinda hear my voice fall when someone asks whether mocking out the APIs is the right choice.

[00:05:25]
Cuz I have been burned by a team who didn't even ship those changes. They were trying them out on staging, and they broke their thing. Our tests are running against their staging APIs and staging environment. Our tests are breaking, we don't know why, we think our tests are flaky. People want to turn off the tests, right?
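
(That failure mode is exactly what stubbing avoids: a test that mocks the other team's API with cy.intercept() can't be broken by their staging deploys, at the cost of no longer exercising the real integration. A sketch, with a hypothetical route and fixture:)

    // Stub the other team's API so their staging breakage can't fail this test.
    cy.intercept('GET', '/api/inventory', { fixture: 'inventory.json' }).as('inventory');
    cy.visit('/products');
    cy.wait('@inventory'); // the page now renders against stable, known data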

[00:05:41]
And that's the kind of interesting piece around all of that. But I like the idea of, we know these are common things, right? Again, with the Cypress dashboard, there's the ability to see what the flaky tests are, right? Having a periodic review, or almost a postmortem of sorts, to figure out what's at the bottom of the issue, I think is a super interesting idea.

[00:06:00]
>> Someone asked about the longest test suite you've seen, and what would the longest acceptable one be, in your opinion?
>> I think I've seen ones that run 24 hours. [LAUGH] All right.
>> Weekly build. [LAUGH]
>> Yeah, right, and I mean some things, like you can see a world where you do a full integration test, right?

[00:06:22]
What we typically do for that is we will run it on a cron, right? Other than that, at the kind of PR level, when the tests are starting to take longer than a code review, when it is the blocker for you shipping, I think that's when you're dancing on the threshold of acceptability.

[00:06:39]
But you can imagine a world where you have these more in-depth tests where you don't necessarily need them running on every change, but maybe once a night, right, or something along those lines. So that if it's kind of a major release or something like that, you're catching it before it gets really bad.

[00:06:55]
Cuz you're running it on staging or something like that. That will kind of get you most of the benefit, I think, early on.
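
(A sketch of that split in GitHub Actions terms; the workflow name, spec paths, and 3 a.m. cron are all assumptions. The cheap smoke specs gate every PR, while the full in-depth suite runs nightly against staging:)

    # .github/workflows/e2e.yml (hypothetical)
    on:
      pull_request:            # gut-check suite on every PR
      schedule:
        - cron: '0 3 * * *'    # full suite nightly
    jobs:
      e2e:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: npm ci
          - run: |
              if [ "$GITHUB_EVENT_NAME" = "schedule" ]; then
                npx cypress run    # everything, e.g. against staging
              else
                npx cypress run --spec "cypress/e2e/smoke/**/*.cy.js"
              fi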
>> That's where the benefit of something like suites would be helpful. Where you have the PR suite, that's the gut check. And then the staging suite, that's like, I don't want it in front of customers until it does [CROSSTALK].

[00:07:13]
>> Yeah, exactly. Cool, thank you all so much.
>> [APPLAUSE]
