Lesson Description

The "Linting Rules" lesson is part of the full Cursor & Claude Code: Professional AI Setup course featured in this preview video. Here's what you'd learn in this lesson:

Steve shares some lint rules that add guardrails for agents as they write code. Leveraging existing tools like ESLint can lead to higher-quality results and less refactoring for AI code contributions. Some rules may be too strict for teams to follow, but they can be used temporarily with agents.


Transcript from the "Linting Rules" Lesson

[00:00:00]
>> Steve Kinney: There isn't a lot of new ground to cover here: if you have good tests, you catch regressions. There's nothing AI-agent specific about that; it's more that you were playing fast and loose with tests before because you trusted yourself and your team. And honestly, there are parts of the code base, even in the last project I worked on, that weren't particularly well tested because I wrote them four years ago when I was by myself and was sloppy.

[00:00:31]
But the code never changed and it works, you know what I mean? And you knew that no one was going to touch those things. When you have an AI agent, you don't know that anymore, and so, yes, you do need to be good about your test coverage, so on and so forth.

[00:00:51]
And honestly, as long as you're willing to read the tests, have it help you with that. There's no glamour in writing a bunch of unit tests for code that's already there; do it, it's important, but it's not fun. And it's probably both a product manager and an engineering manager who are not happy that you want to spend a month doing that so you can have AI write code for you.

[00:01:14]
Got it, I live in the real world too; I was an engineering manager, and I wasn't always down to do those things. So have good tests, commit early and often, but there's not a lot of new ground to cover there. I think this part was interesting to me because I kind of joked about this before.

[00:01:35]
There are rules that seem good to install, and then there would be a revolution on your team if you tried to install them, if no one could pass CI/CD because of some of these rules, right? And insofar as there are still humans contributing to your code base a lot of the time, some of these might still be off limits.

[00:02:04]
But I have the unique perspective right now of somebody who is just by myself for the first time in years. The last time I was by myself was about four years ago, when I was starting at Temporal; it's the first time in four years where I can do whatever I want.

[00:02:18]
These are the ones that have particularly served me for the purpose of using a lot of these AI coding tools, because they have reduced the chaos when Cursor rules, and we'll see CLAUDE.md in a little bit, weren't doing the trick. So some of these I'm just gonna show to you because, I don't know, reading them is not that much fun.

[00:02:46]
But I'll explain them. There is a term called cyclomatic complexity, which is basically how many loops and nested branches are in your code; each one of those counts. And again, you probably couldn't turn this on in your code base if you wanted to; in fact, I actually can't turn it on in my code base either, so I have it set to warn.
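A minimal sketch of that rule in an ESLint flat config (the threshold of 10 is illustrative, not the actual number from the course; Steve notes his is higher and set to warn):

```javascript
// eslint.config.js -- a sketch, not a prescription.
export default [
  {
    rules: {
      // Warn once a function's cyclomatic complexity (branches, loops,
      // logical operators) climbs past 10, rather than failing the build.
      complexity: ['warn', { max: 10 }],
    },
  },
];
```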

[00:03:11]
And the number's a lot higher, but I hammed it up a little bit for the slide. But you could either go greenfield, or you can set it for a little bit and make the LLM deal with it in a particular file or something. You can begin to use these; it's less about having the rule on all the time, it's about using it as a feedback loop.

[00:03:31]
And ideally, if you can keep it on all the time, that would be great, but, like, how many nested loops and conditionals can you have? Max depth is kind of similar. How many lines can you have per file? My numbers are all actually a little higher than this; I hammed them down to my aspirational ones because this is an aspirational talk.

[00:03:53]
But if we know that agents only read the first 250 lines of a file, not letting the LLM create files that are longer than that could possibly be useful. Because of the context window, maybe the first 250 lines is all it should read before deciding whether it needs to read more; but now you've split things into many smaller files that it will actually read.
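Those size caps might look like this in an ESLint flat config (a sketch; the numbers are the aspirational slide values, with the 250-line file cap echoing the point about how much of a file agents read):

```javascript
// eslint.config.js -- illustrative size rules, not a prescription.
export default [
  {
    rules: {
      // Cap how deeply blocks (loops, conditionals) can nest.
      'max-depth': ['warn', { max: 3 }],
      // Cap lines per file so the agent splits code instead of piling it up.
      'max-lines': ['warn', { max: 250, skipBlankLines: true, skipComments: true }],
      // Cap lines per function, mostly for the humans reading it later.
      'max-lines-per-function': ['warn', { max: 50 }],
    },
  },
];
```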

[00:04:23]
That's back to where the art meets the science, which is playing around with that. I'm not saying take these rules, that these are the rules you should use; I'm saying there are some trade-offs here. So max lines per file, max lines per function, and so on. And some of this, like those last two, is less about what's good for LLMs.

[00:04:45]
It's more about what's good for the human reading them later. Because what I've noticed with that max-params rule, no more than three params, is that the agent will use one of those options objects at some point. Which is good, I'd rather have that; it's actually a good pattern to reach for to get around the rule.
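That pattern might look like this (a sketch; the function and its fields are hypothetical, not from the course). With `'max-params': ['error', { max: 3 }]` in the config, a long positional signature fails lint, so the agent collapses it into a single options object:

```javascript
// Before (fails the rule): createUser(name, email, role, isActive)
// After: one destructured options object, with defaults -- easier to read
// and to extend later without breaking call sites.
function createUser({ name, email, role = 'member', isActive = true }) {
  return { name, email, role, isActive };
}

const user = createUser({ name: 'Ada', email: 'ada@example.com' });
console.log(user.role); // 'member'
```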

[00:05:04]
And so you have some of these general size rules in place, because what happens when you say, hey, you need to run lint after each and every one of these changes, or you have a pre-commit hook? You can start to tell it it wasn't successful until lint passes, and it will take that feedback. I had to do some wrangling to get it to stop touching my lint rules, but I did it.

[00:05:24]
It doesn't do that anymore; sometimes I want it to and it doesn't. Getting those things in place, I think, works really well. And then, because I am neurotic, there's this plugin called Unicorn, which has a bunch of really great stuff; there's also another ESLint library called Perfectionist.

[00:05:43]
They are for people like me, which is: I want my file names to be consistent, and so now I get an ESLint rule and I don't have to remind it all the time. At a certain size of code base, even with the Cursor rule, it will eventually get the hint, but then one out of every 100 times it'll mess up.
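A sketch of what that might look like (the plugins are real, eslint-plugin-unicorn and eslint-plugin-perfectionist; the specific rule choices here are mine, not Steve's config):

```javascript
// eslint.config.js
import perfectionist from 'eslint-plugin-perfectionist';
import unicorn from 'eslint-plugin-unicorn';

export default [
  {
    plugins: { perfectionist, unicorn },
    rules: {
      // Keep every file name in kebab-case, so the agent can't drift.
      'unicorn/filename-case': ['error', { case: 'kebabCase' }],
      // Keep imports consistently sorted, so the linter catches what a
      // Cursor rule would only catch 99 times out of 100.
      'perfectionist/sort-imports': 'error',
    },
  },
];
```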

[00:06:02]
And so this will catch it, and I don't have to shove it into the rules all the time. So there are a lot of these rules, and some, again, like import order, are a hard sell on a team. That cyclomatic complexity rule on an existing code base is a hard sell, but you could use a tool like lint-staged, which, when you go to make a git commit, will only lint the files that changed in that commit.

[00:06:25]
So if you pair that with an environment variable just for the LLM, it's like: if you touch this file, you need to clean it up. You can play with it a little bit; you could just turn it on for a little while and then otherwise have the normal rules, so on and so forth.
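One way to sketch that combination (lint-staged is the real tool; the AI_AGENT environment variable and the rule choices are hypothetical):

```javascript
// package.json fragment: lint-staged runs ESLint only on files staged
// in the current commit.
//   "lint-staged": { "*.{js,ts}": "eslint --fix" }

// eslint.config.js -- tighten the contentious rules only when an agent
// is driving, flipped by a hypothetical AI_AGENT environment variable.
const agentMode = process.env.AI_AGENT === '1';

export default [
  {
    rules: {
      complexity: [agentMode ? 'error' : 'warn', { max: 10 }],
      'max-lines': [agentMode ? 'error' : 'off', { max: 250 }],
    },
  },
];
```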

[00:06:41]
You can use it to do refactors: hey, I'm gonna turn this on for a minute, I'm gonna send you into this one directory to pay down some tech debt, and then I'm gonna turn it back off so my team doesn't kill me. All fine options. But I think either I am totally wrong, or no one else is really thinking about just using the normal tools to reinforce these things.

[00:07:04]
Because I've had wildly great success with it. Or everyone else is doing it and I'm just late to the party; that's possible too. But I've had wildly great success with it.
