The "Test Automation Guidelines" Lesson is part of the full, Enterprise Web App Accessibility (feat. React) course featured in this preview video. Here's what you'd learn in this lesson:

Marcy discusses the importance of automation in accessibility testing for software development. They explain that while automation can help with certain aspects of testing, it has limitations, and manual testing is still needed. The instructor also provides examples of which accessibility checks can be automated and which still require manual testing, and emphasizes creating your own test coverage and ensuring that automated tests align with real browser behavior.

Transcript from the "Test Automation Guidelines" Lesson

[00:00:00]
>> Testing is awesome, but it can also be kind of hard. Automation, especially for accessibility, is an important aspect of building quality software. If you have a test suite with any automated tests in it, or even if you don't, you have some opportunities to let computers help test things for you.

[00:00:17]
Things that can be programmatically determined, computers can help you with. There are some things they can't tell, that they're not that great at, and we still need human review, we still need manual testing. But if you could bake in a contract for how something should work, for your teammates, or even for future you, coming back in January after a long break and wondering, how did that work again?

[00:00:42]
With some test coverage, it would be more stable and less inclined to break without anyone noticing. But it can be kind of hard. Sometimes the tooling doesn't support what you're trying to test, or maybe you're locked down on an old version of a library. That happened to me two weeks ago, when I went to update user-event and Testing Library.

[00:01:03]
I'm like, la-di-da, here you go, team. And then I understood why people aren't as eager to do that: you have to make everything work in all these other branches. You might not be able to go update all the tests to make them work with this new upgrade.

[00:01:18]
So there are things that go into upgrading tooling that are challenging. You have to schedule them sometimes. That's why this can be a bit hard, but there's a lot you can do with some basic tools as well. There is a misconception that everything in accessibility can be automated.

[00:01:36]
I've heard executives want to cut their team size, especially when it comes to training for accessibility, because they think that automation can solve it all. Sorry, I'm afraid that it can't. We need humans to review stuff that humans are going to use. Kind of intuitive. So here are the things that we can automate, cuz we do want computers to help us, plus we want to make our jobs easier.

[00:02:03]
So we can automate HTML and ARIA validation. TypeScript is great for this, like we were talking about; it can catch when you've done something wrong with your markup. Form labels, we can catch those. We can catch color contrast most of the time, though I do have contrast over images and gradients in the manual column, that's a bit harder to check.
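
As a rough sketch of what that kind of automated check can look like, here's a jest-axe test. The Newsletter form component is hypothetical, and note that axe's color contrast rules need a real browser rather than jsdom:

```tsx
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
// Hypothetical form component under test.
import { Newsletter } from './Newsletter';

expect.extend(toHaveNoViolations);

test('newsletter form has no detectable a11y violations', async () => {
  const { container } = render(<Newsletter />);
  // Catches invalid ARIA, missing form labels, and similar markup issues.
  // Contrast checks are unreliable in jsdom, so run axe in a real browser
  // (for example via Cypress) for those.
  expect(await axe(container)).toHaveNoViolations();
});
```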

[00:02:25]
Text over video? Yeah, good luck, that's very manual to test. Accessible names, so like icon buttons: you have an icon button component with a prop for you to pass text in. Does that text land in the right place internally in the component? Focus management for keyboard stuff, those are some of my favorite tests to write, because you can assert, did focus move where you expected? And the tools have gotten better for that in the last few years.
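
A minimal sketch of both kinds of tests with Testing Library, assuming a hypothetical IconButton component that exposes its label prop as the accessible name:

```tsx
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import '@testing-library/jest-dom';
// Hypothetical icon button component.
import { IconButton } from './IconButton';

test('label prop lands in the accessible name', () => {
  render(<IconButton icon="x" label="Close dialog" />);
  // Query by role + name: this fails if the text doesn't end up
  // in the right place internally.
  expect(screen.getByRole('button', { name: 'Close dialog' })).toBeInTheDocument();
});

test('focus moves where expected', async () => {
  const user = userEvent.setup();
  render(<IconButton icon="x" label="Close dialog" />);
  await user.tab();
  // Assert the outcome: focus landed on the button.
  expect(screen.getByRole('button', { name: 'Close dialog' })).toHaveFocus();
});
```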

[00:02:55]
You can also test the document language in a page-level test, like using Cypress. As for the manual stuff, you can't really test focus order. If it's jumping around in a weird order, what's the computer gonna say about that? Text alternative quality: if you have the wrong alt text on something, or it's just not right.
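
Picking up the Cypress idea from above, a page-level document language test might look like this, assuming an English-language app served at the root route:

```ts
// cypress/e2e/document-language.cy.ts
describe('document language', () => {
  it('declares lang on the <html> element', () => {
    cy.visit('/');
    // Screen readers use this attribute to pick the right voice.
    cy.get('html').should('have.attr', 'lang', 'en');
  });
});
```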

[00:03:19]
The tooling can tell you when alt text is missing; it can't really tell you if it's not good. Screen reader testing, we can't automate that. We can run things in the cloud with Assistiv Labs, but as for assertions with screen readers, we're not quite there yet. That'd be cool to see more of. I know there are some experiments around that, but nothing mainstream.

[00:03:41]
Contrast, we talked about. Error identification, if you have the wrong content in your validation errors, the computer can't really help you there. And then click events on divs: the computer doesn't know what it doesn't know about. It's kind of a blind spot, it won't know. So as for estimates of how much of WCAG we can automate, it's around 50% of issues by volume.

[00:04:04]
Sometimes estimates are higher, but yeah, it's not always consistent. And it's by volume, like the number of issues that you could possibly have on a page, not by success criterion. And that's only from Deque, that's their estimate, and I think I chose the lower end of the number.

[00:04:22]
But I have linked all of their rule descriptions for axe-core, so you can go see what they cover. And they are also very involved with the standards side of accessibility rules for automation, the ACT Rules group, which stands for Accessibility Conformance Testing. It's just trying to make this stuff repeatable, more standard, so that different tools aren't so wildly different in what they report.

[00:04:48]
So that's pretty cool. But we definitely still need some manual testing, and I have an article in here by Eric Bailey on Smashing Magazine. He does a really eloquent job of explaining why we still need manual testing, so check that out. So create your own coverage: I think some of the best stuff you can do for automation is to write feature tests for the components that you know a lot about.

[00:05:15]
You know how that component should work. You've got the requirements, the specs. You've got insider knowledge on how it was coded. So you can write tests that assert the outcomes: does my focus go to the right place? I'm not going to test the internal logic of it, I'm going to test, does it have the right outcome?

[00:05:34]
Right inputs create the right outputs. Keyboard tests are great for that kind of thing, and they're fun to write with tools like Testing Library and Jest. It can be tricky to capture exactly how the browser works, though, so when you're writing these tests, compare them with how the browser actually behaves.
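
For instance, a keyboard feature test for a hypothetical MenuButton component, asserting outcomes like expanded state and focus rather than internal logic:

```tsx
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import '@testing-library/jest-dom';
// Hypothetical menu button component.
import { MenuButton } from './MenuButton';

test('ArrowDown opens the menu and focuses the first item', async () => {
  const user = userEvent.setup();
  render(<MenuButton label="Actions" items={['Edit', 'Delete']} />);

  await user.tab(); // focus the button, like a keyboard user would
  await user.keyboard('{ArrowDown}');

  // Right inputs, right outputs: expanded state and focus, not internals.
  expect(screen.getByRole('button', { name: 'Actions' }))
    .toHaveAttribute('aria-expanded', 'true');
  expect(screen.getByRole('menuitem', { name: 'Edit' })).toHaveFocus();
});
```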

[00:05:55]
Because if it's passing the automated test but it doesn't work in the browser, your test isn't effective. That would be what I call a tautological test, it's kind of not testing anything useful. But it's really hard to tell, so you have to check it in a real browser.

[00:06:09]
Like, say I capture a test failure and do it test-driven development style, TDD. I capture the failure, I think I've fixed it, and then I go and check manually just to make sure, as part of my automated test process. So that when I check that code in, I know the test actually could reproduce that bug, and it's fixed now.
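
A sketch of that capture-the-bug flow, assuming a hypothetical Dialog component that closes on Escape and is supposed to return focus to its trigger:

```tsx
import { useState } from 'react';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import '@testing-library/jest-dom';
// Hypothetical dialog component.
import { Dialog } from './Dialog';

function Example() {
  const [open, setOpen] = useState(false);
  return (
    <>
      <button onClick={() => setOpen(true)}>Open settings</button>
      {open && <Dialog onClose={() => setOpen(false)} />}
    </>
  );
}

test('focus returns to the trigger when the dialog closes', async () => {
  const user = userEvent.setup();
  render(<Example />);

  await user.click(screen.getByRole('button', { name: 'Open settings' }));
  await user.keyboard('{Escape}');

  // Written to fail first, reproducing the bug (focus lost to <body>),
  // then passing once the fix lands. Verify in a real browser too.
  expect(screen.getByRole('button', { name: 'Open settings' })).toHaveFocus();
});
```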
