
The "Automated Accessibility Testing with Ax" Lesson is part of the full, Enterprise UI Development: Testing & Code Quality course featured in this preview video. Here's what you'd learn in this lesson:

Steve discusses the importance of automating accessibility tests and introduces the axe testing library. Accessibility violations cause tests to fail, and a descriptive list of ways to resolve each violation is printed to the console.


Transcript from the "Automated Accessibility Testing with Axe" Lesson

[00:00:00]
>> Let's talk a little bit about accessibility. I think I mentioned this earlier in the course: my team cares a lot about accessibility, and we thought we were doing the right thing, but we didn't realize that our build process was broken. Broken is a strong word.

[00:00:18]
It was issuing warnings. To our credit, the app was overwhelmingly, mostly accessible, and it didn't take a lot to get it into shape. But we had lint issues around the accessibility stuff that were just swallowed, because they only logged to the console and weren't breaking the build, right? And this is one of those things where we joke that the build process is supposed to stop you on a Friday afternoon when you're lazy, but even in a well-meaning situation, it just didn't know, right?

[00:00:48]
Cuz you just didn't think to run it from the command line, whereas a build process that actually broke would have caught it, right? And there are lots of different ways to do it. In a Svelte app, there's this thing called svelte-check that will tell you, but you can also have tests, right?

[00:01:05]
And the tests are super interesting because you can also have some nuance around what exactly is going on in a given case, right? Or, if you're trying to target certain checks and work through them over time, you could start them all as TODOs for every component and check them off over time.

[00:01:21]
I think you get a little more granularity with the programming model that comes with unit tests versus some of the other approaches, but they all kind of work. The cool thing about what I'm gonna show you here is that it goes to maintainability over time, so you don't end up in a situation where, all of a sudden, some big government agency wants to use your product and they can't, because you're not accessible.

[00:01:39]
You can basically say, hey, yes, you should still go through with a screen reader, sure, sure, sure, but our tests will fail if we have known violations, right? That's one of those things where the automated nature of all of this keeps your code base healthy over time.

[00:01:56]
There is a library, you've probably heard of it before, called axe, it's a-x-e. And what you can do with that is basically have it audit the way you wrote your code against its known checks.
>> You said axe was an npm package?
>> axe is an npm package.

[00:02:15]
It is also a browser extension. It has lots of things that you can do to kinda do static analysis of your code. In this case-
>> [INAUDIBLE]
>> What's that?
>> You mean axe-core?
>> axe-core.
>> [LAUGH] I couldn't find axe, so.
>> Yeah, that was probably taken.

[00:02:31]
In this case we'll use just axe, because we're building a test suite. Again, this is one option. We're currently at the point of component testing, so I felt like we could do it in here. Could you do it from your browser-based tests? You could. Could you do it with a tool that does static analysis of the entire code base?

[00:02:46]
You also could. Those are a little bit trickier because with the Chrome extension, for instance, you gotta be using your browser, and the static analysis a lot of the time is all or nothing, or you have to put comments everywhere. Like I kinda said in the preamble to this, if you're in that situation of, hey, we just found out we gotta do this, you can start by layering in the tests as to-dos and slowly get your code base to where it needs to go over time.
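If you're starting from a code base with known gaps, one low-ceremony way to layer this in is Vitest's built-in test.todo, something like the sketch below (the component names other than ObstacleCourse are made up):

    import { describe, test } from 'vitest';

    describe('accessibility audit', () => {
      // Components that already pass get real assertions; everything else
      // starts as a todo and gets converted as it's cleaned up.
      test.todo('ObstacleCourse has no axe violations');
      test.todo('SignUpForm has no axe violations'); // hypothetical component
      test.todo('DataTable has no axe violations'); // hypothetical component
    });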

[00:03:10]
What strategy is right for you depends on the code base, but let's look at one common way to deal with this, and then you can apply it as need be. The bonus exercise that we briefly talked about the other day was this: if you look at the browser here, I've got a silly page with all of the inputs, and they change and stuff like that.

[00:03:38]
Everything is great. And what we want to do is make sure that there are no obvious violations there. And this is super easy to do. And what we wanna have happen is mount the component and check to see, do we have any violations, right? And so for instance, I've got this ObstacleCourse.

[00:04:02]
We'll kinda do the normal setup. And I think where this is also super powerful, especially for a component test, is if you're building a design system, for instance, where you've got these individual components, right? You wanna run these checks. A lot of times when you're working on a design system, sure, maybe you have a Storybook or something like that.

[00:04:26]
But you don't really have an app, right? You have a collection of pieces of user interface. And so the idea that you could take the entire suite of components, run through them all, render them, and do the accessibility check, with tests that break on every single build, means that you're gonna ship a design system that is accessible.
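As a rough sketch of that design-system sweep, assuming React Testing Library, Vitest, and jest-axe (the component names and props below are hypothetical):

    import type { ReactElement } from 'react';
    import { render } from '@testing-library/react';
    import { axe, toHaveNoViolations } from 'jest-axe';
    import { describe, expect, it } from 'vitest';

    // Hypothetical design-system components; substitute your own exports.
    import { Button } from './Button';
    import { TextField } from './TextField';
    import { Callout } from './Callout';

    expect.extend(toHaveNoViolations);

    const components: [string, ReactElement][] = [
      ['Button', <Button>Save</Button>],
      ['TextField', <TextField label="Email" />],
      ['Callout', <Callout>Heads up!</Callout>],
    ];

    describe('design system accessibility', () => {
      it.each(components)('%s has no axe violations', async (_name, element) => {
        const { container } = render(element);
        expect(await axe(container)).toHaveNoViolations();
      });
    });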

[00:04:43]
So let's go and pull that in. We've got this render that we've been using the whole time, and we'll pull in the ObstacleCourse. Like I said, it's just a component in this case. One of the cool things, and again, we're using Vitest, is that jest-axe will work just fine with it.

[00:05:02]
There's a plugin called jest-axe that we can use, and we'll pull that in as well. So, jest-axe. It really only exports a couple of things we care about. One is axe, which will run the analysis on the component. And then there's a helpful matcher that we can use with expect: toHaveNoViolations, right?
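In code, that looks something like this, with jest-axe installed as a dev dependency; the ObstacleCourse import path is a guess at the repo's layout:

    // npm install --save-dev jest-axe
    import { render } from '@testing-library/react';
    import { axe, toHaveNoViolations } from 'jest-axe';
    import { expect, it } from 'vitest';

    // Path and export style are assumptions about the course repo.
    import ObstacleCourse from '.';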

[00:05:35]
It works the same way that jest-dom has that /extend-expect, or expect-extend, whichever one it is. I did the wrong one the first time and switched it, and I'll never remember, because you do it once in a code base and it's there the entire time. We can go peek at some point; it's literally in the setup, actually.
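If you'd rather register the matcher once for the whole suite, a Vitest setup file is one place to do it; this is a sketch, and the file name is an assumption:

    // test/setup.ts (assumed name), wired up via `test.setupFiles`
    // in vitest.config.ts so every test file gets the matcher.
    import { expect } from 'vitest';
    import { toHaveNoViolations } from 'jest-axe';

    expect.extend(toHaveNoViolations);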

[00:05:53]
You can do it globally like that, but depending on what you have, we'll just do it the old-fashioned way here: expect, and then we can extend that. And I point this out because, if there are common things you want beyond what axe gives you, you could write your own additional matchers that are unique to your code base as well.

[00:06:14]
Maybe there's something you commonly find yourself doing given the data structures you have, or something like that. It's really hard to invent fake versions of why you would need it, but if there's some repeated thing where you want expect and then something very unique to your application, you can absolutely do that.
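Purely for illustration, a project-specific matcher could look like the sketch below; the matcher name and the check it performs are invented:

    import { expect } from 'vitest';

    // Hypothetical matcher: passes only if every <img> in a container has alt text.
    expect.extend({
      toHaveAltTextOnAllImages(container: HTMLElement) {
        const missing = Array.from(container.querySelectorAll('img')).filter(
          (img) => !img.hasAttribute('alt'),
        );
        return {
          pass: missing.length === 0,
          message: () =>
            `expected every <img> to have alt text, but ${missing.length} did not`,
        };
      },
    });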

[00:06:29]
And we'll give it the toHaveNoViolations, right? So now that is available as well. Render returns a lot of things, as we saw when we were playing with the types: you can get the container, the actual result, the base element, and so on and so forth. In this case, we'll grab the container.

[00:06:50]
I could also probably grab the entire screen too, now that I think about it, but we haven't done that one yet. So let's take a look... countainer. I love TypeScript. The fact that I got a red squiggly line here, instead of finding out later that it was undefined, is the best thing.

[00:07:06]
The number of times that has saved me from making a silly mistake has been super helpful. And then, for the results: axe returns a promise, so I gotta make this an async function. And so we can basically take this DOM, pass it in, and say, cool, go ahead and do your analysis.

[00:07:36]
Where have I made boo-boos, right? And this one, I think, is actually fine, much to my surprise, but we'll go and create an issue in a second as well. And then basically, all you need to do at this point is toHaveNoViolations, and maybe we'll even spell result right. And we won't put a dot at the end, right?
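Put together, the test as typed so far looks roughly like this (the ObstacleCourse import path is an assumption):

    import { render } from '@testing-library/react';
    import { axe, toHaveNoViolations } from 'jest-axe';
    import { expect, it } from 'vitest';

    import ObstacleCourse from '.'; // assumed path

    expect.extend(toHaveNoViolations);

    it('has no axe violations', async () => {
      // render() hands back the container, among other things.
      const { container } = render(<ObstacleCourse />);

      // axe() returns a promise, hence the async test callback.
      const results = await axe(container);

      expect(results).toHaveNoViolations();
    });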

[00:08:03]
And now I should be able to run this test. And let's see, [SOUND] expected. No, wait. This is why you live code in front of people, cuz then they catch errors for you. It's like pair programming but even bigger. So in this case we have no violations. So let's go create one just for funsies.

[00:08:27]
[LAUGH] Different versions of fun for different people, I guess.
>> What defines a violation for this specific-
>> Anything that breaks one of the heuristics. I'm going to do the one I accidentally do all the time, [LAUGH] which is an input with no label, or forgetting the HTML "for" attribute, or something along those lines, right?

[00:08:47]
That would be like, hey, the screen reader is not gonna be able to read this.
>> Baseline, I don't know if you can configure which rule set it uses.
>> I don't know cuz I usually use the standard one. I would imagine you should be able to- [CROSSTALK]

[00:08:59]
>> Like using AA versus AAA.
>> Yeah, AAA is, I think, the one we're aiming for.
>> Yeah.
>> This will get you most of the way there; you should still be testing with a screen reader and stuff like that as well. Cool, so let's go ahead and go into the index.tsx, and we'll just, I don't know, give ourselves a bonus.

[00:09:21]
You know what, go ahead, I was like, yeah, I'll just make an input field where you'll type, and I'll have it on change. Everything's great, life will go on. Code, code, code, moving along, right? We can go in here and run that test again. Save this, and we'll grab the right one.
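The kind of markup that trips this rule looks roughly like the first component below (hypothetical, not the exact course code); the htmlFor/id pairing in the second is the fix:

    // Typically flags the missing-label rule: an input with no associated <label>.
    const Broken = () => <input type="text" onChange={() => {}} />;

    // Passes: the label is associated with the input via htmlFor/id.
    const Fixed = () => (
      <>
        <label htmlFor="nickname">Nickname</label>
        <input id="nickname" type="text" onChange={() => {}} />
      </>
    );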

[00:09:48]
We'll see in a second. It's interesting that that one passes, for reasons I don't totally understand. Let's toss it and put it in there. All right, so that one, for some reason... I have a label.
>> You just put that back.
>> I put it back. But here we have an input field where the browser couldn't figure it out.

[00:10:24]
It could have been the form tag, or maybe because we were in a form tag; it's unclear exactly why that one did pass the test. But in this case, just having one input that didn't have a label got me to the point where it shows you, basically, right here, where your issue is.

[00:10:39]
And you can see there's a long list of things; I wonder if I accidentally appeased one of these in the implementation. But the nice part is, if you don't have tooling like this, it becomes a matter of hoping that someone catches that one input field in a code review.

[00:10:58]
And if you are a good person and you have tiny little PRs that are hyper-focused on one thing, then maybe code review will work for you. If you have that one PR where you're apologizing to your teammates during standup for its size, because it's 47 files and 3,000 lines of code, the chance that you're gonna miss one of these is pretty strong.

[00:11:22]
The best part about this is that because it's built into the unit test process, and we already have that build script, it'll break the build and you get immediate feedback, right? And we'll look at ways you can get immediate feedback before you even get to the build process a little bit later.

[00:11:35]
But the idea is that we're building the systems that help us make good choices, because like I said, I have a team that deeply cares about this, and we messed up because I didn't even realize. Yeah, the issue was that a passing build doesn't necessarily mean things are good; it just means we weren't blowing up the build, and that was by accident.

[00:11:57]
Lots of things were blowing up the build, just not that. And so by verifying these things and building these systems, we'll never have that issue again, because the build will break if it ever comes up. And that helps us get to the point where we don't drift too far off course.
