Web App Testing & Tools

The Role of QA and AI in Testing

Miško Hevery

Qwik Creator (Previously Angular)

The "The Role of QA and AI in Testing" Lesson is part of the full, Web App Testing & Tools course featured in this preview video. Here's what you'd learn in this lesson:

Miško answers audience questions related to the role Quality Assurance has in the testing lifecycle and how AI can facilitate testing. Traditional QA roles were responsible for manually testing the application, whereas modern QA engineers help define the domain-specific language for testing and the infrastructure for executing the tests.

Transcript from the "The Role of QA and AI in Testing" Lesson

[00:00:00]
>> I'm curious if you have thoughts that you can share on where or if you think QA fits into the software development life cycle? Is there a strong need for QA, do you think?
>> Yeah, so that's a good question. So QA is kind of, you can think of it in a lot of different ways.

[00:00:17]
There's the old way of doing QA, where they literally test everything by hand, and there's the new way of QA, which is more about helping the developers build assertion frameworks. So, in order to have the freedom to do all of these things, you have to set up your environment in the right way.

[00:00:34]
Some of these things are gonna be pretty obvious and we're gonna talk about them; other things are gonna be more complicated. Like, how do you set up UI testing, right? That's kind of the semi-valid excuse, that it gets complicated. And so a place where QA can help is to create kind of a testing framework, which we call pages, in the system, so that you get a good DSL, a good domain-specific language, for the developers to express themselves, right?
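As a rough sketch of what that kind of page-based DSL might look like, here is a hypothetical, Playwright-flavored example; the LoginPage class, the route, and the selectors are assumptions for illustration, not code from the course:

    import { test, expect, Page } from '@playwright/test';

    // Hypothetical page object: wraps raw selectors behind a small,
    // readable vocabulary that tests can speak in.
    class LoginPage {
      constructor(private page: Page) {}

      async open() {
        await this.page.goto('/login');
      }

      async logInAs(email: string, password: string) {
        await this.page.getByLabel('Email').fill(email);
        await this.page.getByLabel('Password').fill(password);
        await this.page.getByRole('button', { name: 'Log in' }).click();
      }

      errorBanner() {
        return this.page.getByRole('alert');
      }
    }

    test('rejects a wrong password', async ({ page }) => {
      const login = new LoginPage(page);
      await login.open();
      await login.logInAs('user@example.com', 'wrong-password');
      await expect(login.errorBanner()).toContainText('Invalid credentials');
    });

The test at the bottom is the part developers write day to day; the page object is the infrastructure that a more technical QA role can own.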

[00:01:02]
So that's one thing. The other thing a QA can do is this: you as a developer produce the system, hopefully in a testable way, and then over time issues are discovered. And so when a particular issue is discovered, you should always create a regression test saying, hey, this is an issue that we missed.

[00:01:24]
And this is another useful thing for a QA to do, which is, hey, using these domain-specific languages, can I recreate this particular scenario? And it is so much more helpful, then, to just give the scenario over to the developer and say, hey, this needs to be fixed. The other thing that's useful is that, oftentimes, scenarios are complicated.

[00:01:46]
And a lot of it is just, hey, this is a complicated scenario that I've recreated. It's gonna be super difficult for me to debug or figure out what's going on underneath it, so let me simplify it, right? Let's just randomly start deleting portions of the code, keeping a deletion only if the test still fails, right?

[00:02:05]
And what you're doing there is, without having any knowledge about how the system works or why the test is failing, you get to whittle the scenario down to the simplest possible thing. And those are all useful things that you can do. Now, it does mean that the QA has to be more technical, right?
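A crude sketch of that whittling-down loop, independent of any particular framework; the Step type and the stillFails callback are hypothetical stand-ins for "replay these steps and check whether the failure reproduces":

    // Hypothetical scenario minimizer: try dropping one step at a time and
    // keep the deletion only if the failure still reproduces.
    type Step = () => Promise<void>;

    async function minimizeScenario(
      steps: Step[],
      stillFails: (candidate: Step[]) => Promise<boolean>
    ): Promise<Step[]> {
      let current = steps;
      let shrunk = true;
      while (shrunk) {
        shrunk = false;
        for (let i = 0; i < current.length; i++) {
          const candidate = [...current.slice(0, i), ...current.slice(i + 1)];
          if (await stillFails(candidate)) {
            current = candidate; // this step wasn't needed to trigger the bug
            shrunk = true;
            break;               // rescan the smaller scenario from the start
          }
        }
      }
      return current;            // smallest scenario that still fails
    }

Notice that nothing in the loop needs to understand the system; it only needs a way to rerun the scenario and ask whether it still fails.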

[00:02:25]
It's not the old style, just throw a bunch of people at it and have them click around, which is, by the way, a boring job that nobody wants to do anyways, right? You wanna automate it so that it's both faster and you get better results. And I think the role of a QA becomes more of, how do I build up more infrastructure to enable more expressive tests, right?

[00:02:53]
Because one of the things that we're gonna talk about is that you want your tests to be easy to read. Because remember, we said a test is a form of documentation. So when you come to a test, it should be obvious what scenario you are testing. And I oftentimes see these super complicated tests where you're like, what in the world are you trying to tell me, right?

[00:03:11]
How am I supposed to reason about this particular thing? And people just don't spend enough time either whittling down the test to the bare minimum, or giving proper names to things so that it creates a nice, useful story.
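As a tiny illustration of what proper names buy you, here is the same check written two ways; the invoice-numbering function is a hypothetical example, not something from the course:

    import { describe, test, expect } from 'vitest';

    // Hypothetical function under test.
    function nextInvoiceNumber(previous: string): string {
      const n = Number(previous.replace(/^INV-/, '')) + 1;
      return `INV-${String(n).padStart(4, '0')}`;
    }

    describe('invoice numbering', () => {
      // Unclear: the reader has to reverse-engineer the rule from the data.
      test('works', () => {
        expect(nextInvoiceNumber('INV-0041')).toBe('INV-0042');
      });

      // Reads like documentation: the name states the rule being verified.
      test('increments the numeric part and keeps the zero padding', () => {
        expect(nextInvoiceNumber('INV-0041')).toBe('INV-0042');
      });
    });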
>> Someone asked, is testing language dependent?
>> Is testing language dependent?

[00:03:29]
It's a good question. Mostly I would say no. Whatever is a good testing practice in any one language should also be true in other languages. Now, it is true that some languages make this easier than others, because, for example, JavaScript is duck-typed, right? If it walks like a duck and quacks like a duck, it's a duck.

[00:03:50]
That makes it easier to create mocks and hack things around inside of your code base. Not that you necessarily should do that, but it is a tool that you have that you might not have in other languages. So, for example, if you are in C, where everything is strongly typed, and somebody doesn't give you an interface, it makes it really difficult, if not impossible, to do anything about it.

[00:04:16]
Whereas if you're in JavaScript and if somebody doesn't give you an interface, if you're creative enough, maybe you can hack the require or the import system or something like that to get what you want. Is it the best way of testing? No, but it does give you slightly more freedom in kind of the corner cases.
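One concrete shape that escape hatch can take is module mocking, for example with Vitest's vi.mock; the mailer and registration modules below are hypothetical, and the point is only that the import system itself gives you a seam even when the code never exposed one:

    import { test, expect, vi } from 'vitest';
    import { send } from './mailer';               // hypothetical module
    import { registerUser } from './registration'; // hypothetical module that calls send()

    // Replace the whole mailer module, even though the registration code never
    // accepted a mailer as a parameter. vi.mock is hoisted above the imports.
    vi.mock('./mailer', () => ({
      send: vi.fn().mockResolvedValue(true),
    }));

    test('registering a user sends a welcome email', async () => {
      await registerUser('user@example.com');
      expect(send).toHaveBeenCalledWith('user@example.com');
    });

It works, but it's exactly the corner-case freedom being described here; designing the code to be testable in the first place is still the better move.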

[00:04:34]
But I still would say, hey, the best thing to do is to just design your code base to be testable to begin with.
>> What's the role of AI in testing? Do you think it could replace it?
>> That is an excellent question. So let's take a little bit of a diversion here.

[00:04:47]
So the role of AI, I remember about 15, 20 years ago, there was a company that was trying to write a piece of software. I don't think they succeeded at this. And I forgot the name of the company. But the idea was that you as a developer write the application and then the software will generate the tests.

[00:05:08]
And I think that's fundamentally flawed because you can replace that company or the software with AI. It's the same exact thing. The reason I think it's fundamentally flawed is because I think you have it backwards, right? What you're saying is like, here is a real working system, try to figure out what the requirements were.

[00:05:31]
That's not how people work; it's the other way around. Let me give you requirements, and you tell me what the system is. So I actually think the place where AI would be useful is actually reversed, in the sense that, as a developer, what I really wanna do is write a whole bunch of tests saying, hey, let me demonstrate to you, I want software that does this, this, this, and this.

[00:05:52]
And let the AI or some tool automatically generate the actual behavior, right? So you want to do it backwards. But because historically we've been the ones writing the code, we're coming at it from the angle of, here's the code base, go generate some tests for it, because I hear tests are a good idea, right?

[00:06:10]
But that completely misses the quality part of it, because testing is about having software that's well designed. And if the tool or the AI is unable to change your design, then the quality of the tests you can produce is very limited. But when you flip it around, you create the tests first and you let the AI produce the output, I think that's actually a viable and workable thing.

[00:06:35]
And I think we're getting there, because when you have Copilot, it is remarkable sometimes what it produces. Like, there are moments where I'm like, how in the world did you figure this out? But I'm gonna go with it, cuz I think it's actually the right thing.

[00:06:52]
The other thing I wanna point out is that if your code is not testable, the kinds of tests that you end up with are tests that are hard to read, because you are desperately trying to work around the limitations of a code base that doesn't wanna be tested. But if you flip it around and you say, I'm gonna write the test first, nobody in their right mind will write a hard-to-read test, right? That's not how it works, right?

[00:07:16]
You start with a simple-to-read, simple-scenario test, and then you produce the output, right? So if you start test-first, which is what people talk about with test-driven development, of course your tests will be pretty and simple and easy to read and easy to reason about, because, well, that's the path of least resistance.

[00:07:37]
You write a simple test case, and then you write the code, right? You don't write the code, then scratch your head about how in the world you're gonna test this thing, and then end up with some kludgy, complicated thing that's difficult to reason about, right? So that's a very convoluted way of saying that I think historically people have just had the wrong thing in their heads.

[00:07:59]
They assume that it's all about the code and the tests are secondary, whereas I really would like to flip this in your head and say, no, no, no, the tests are the driving force here and the code is the secondary part. Yes, we want code, and at the end of the day the code is the thing that will actually solve the problem and build the application that we want.

[00:08:21]
But think of the tests as the requirements, the description, the documentation, the thing that keeps your team together, the way of explaining how the system works to your teammates. All of those things are what tests are.
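To make that "tests first, code second" framing concrete, here is a trivial, hypothetical sketch of the order of operations; the slugify function is an invented example, and the test is written before the code it drives:

    import { test, expect } from 'vitest';

    // Written first: the test states the requirement.
    test('slugify lowercases the title and joins words with dashes', () => {
      expect(slugify('Web App Testing')).toBe('web-app-testing');
    });

    // Written second: just enough code to satisfy the requirement above.
    function slugify(title: string): string {
      return title.trim().toLowerCase().split(/\s+/).join('-');
    }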
