Lesson Description
The "Setup" lesson is part of the full Build a Fullstack Next.js App, v4 course featured in this preview video. Here's what you'd learn in this lesson:
Brian walks through his development setup and discusses how to balance AI help with real problem-solving to build debugging skills. He also emphasizes using tools that support learning without replacing the effort needed to truly understand the code.
Transcript from the "Setup" Lesson
[00:00:00]
>> Brian Holt: So before I do it, I always get these questions, so I just have a little section here of what my setup is so people don't constantly tweet at me like, "What theme are you using?" This comes from experience. I have a lot of tweets saying, "Brian, what theme are you using?" and it's like, it's the default VS Code one. That's never changed. It's always been that. So we'll be using a lot of VS Code today.
[00:00:23]
I used to work on it, so I signed a blood oath with Satya that I have to continue using it. No Cursor today, but I do use Cursor quite a bit. I'll be on Firefox, and I started using Ghostty, which is pretty cool. Oh, that's new. Wow, that's cool. It's just a terminal emulator. Normally I just use Terminal.app, but Ghostty is the new one that I have started using. Dark Plus theme, MonoLisa font. They noticed that I've been shouting them out for years now, so they actually gave me a code to give all of you.
[00:01:03]
I get no kickback on this. I just like MonoLisa. So there is a code there to get 10% off of the font. I do have ligatures enabled, which is where, if you put two equals signs together, the editor renders them as one long equal sign. I feel like most people have seen that by now in their editors, but I just always call that out. If you want a free font and you don't want to pay for MonoLisa, Cascadia Code from Microsoft is great.
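If you want to try ligatures yourself, here's a minimal sketch of the relevant VS Code settings. The font names assume you've actually installed MonoLisa or Cascadia Code; VS Code falls back to the next family in the list if one is missing.

```jsonc
// settings.json (VS Code supports comments here)
{
  // First installed family wins; "monospace" is the last-resort fallback.
  "editor.fontFamily": "MonoLisa, 'Cascadia Code', monospace",
  // Turns === into a single long glyph, => into an arrow, etc.
  "editor.fontLigatures": true
}
```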
[00:01:28]
And then all my cool icons on the side are from VS Code. I use ESH. I use whatever the default theme for Ghostty is; it looks like Dracula, but I think it's something else, to be honest with you. I use the Starship prompt, so if I pull up Ghostty here, you can see this little part right there where it has emojis and stuff like that; that comes from Starship. And Ghostty ships with a nerd font, so if you're using Ghostty, you don't have to worry about installing one.
[00:02:02]
But like, see if I can show you. I'll make this much bigger so you can see it. Let's see. So you can see all of this stuff, like the Git main here. Let's even go into one of the steps where it shows the Node logo. Those aren't in a font normally, right? So you need a special font that has the additional glyphs that are made for this. That is called a nerd font. I didn't make it up; that's just what it's called.
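The kind of prompt shown here is driven by Starship's per-module config. As a hedged sketch (these are standard Starship modules, but the exact glyphs and file contents below are illustrative, not Brian's actual config), the nerd-font symbols live in `~/.config/starship.toml`:

```toml
# ~/.config/starship.toml — illustrative sketch, not the course's actual config.
# The glyphs below are nerd font characters and will only render correctly
# in a terminal using a nerd font (Ghostty ships with one).

[git_branch]
symbol = " "   # branch glyph shown next to "main"

[nodejs]
symbol = " "   # Node.js logo shown when you cd into a Node project
```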
[00:02:32]
So if you need one, there's a fork of Cascadia Code called CaskaydiaCove Nerd Font. Okay. I would invite you, and I've actually seen this be pretty effective for students, to just use an AI assistant with this course. What I've done here for you is output this entire course to one large text file that you can paste into ChatGPT, Claude, or whatever you're using. This will give it all the context of what this course is about and something to reference.
[00:03:09]
And then you can ask questions like, "Hey, Brian did this. I don't really get it. Can you please help me with this?" Please don't have it write code for you, because I think that's robbing you of an opportunity. But asking questions, like having a personal TA there, I think is helpful. And I do this while I'm writing the course, right? Like, "Hey, this is broken. What can I do to fix this?" Likewise, Next.js has this as well.
[00:03:35]
As you can see, I'm scrolling here, and it is extremely long, so you probably can't paste all of this in at once. I think I counted 77,000 lines of text in here, so that's too big a context window for Claude, at least. So you'll probably have to copy and paste the pieces that you need. But I really like this llms-full.txt pattern, and a lot of sites have started doing it as well. So again, this is exactly what I did for this course.
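If a file like this really is too big for one paste, one rough approach (not from the course; the 4-characters-per-token ratio is a common rule of thumb, not an exact count) is to split it on line boundaries into chunks that fit your model's context window:

```typescript
// Sketch: split a huge llms-full.txt-style file into context-sized pieces.
function chunkText(text: string, maxChars: number): string[] {
  const lines = text.split("\n");
  const chunks: string[] = [];
  let current: string[] = [];
  let size = 0;

  for (const line of lines) {
    // Start a new chunk when adding this line would exceed the budget.
    if (current.length > 0 && size + line.length + 1 > maxChars) {
      chunks.push(current.join("\n"));
      current = [];
      size = 0;
    }
    current.push(line);
    size += line.length + 1; // +1 for the newline removed by split()
  }
  if (current.length > 0) chunks.push(current.join("\n"));
  return chunks;
}

// Example budget: ~30k tokens at roughly 4 characters per token.
const MAX_CHARS = 30_000 * 4;
// const pieces = chunkText(fs.readFileSync("llms-full.txt", "utf8"), MAX_CHARS);
```

Then you paste chunks in as the conversation needs them, rather than the whole file up front.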
[00:04:08]
Also, the Context7 MCP server really helps as well. That's an MCP server that pulls in pieces of docs so that you're always getting up-to-date documentation. I don't know if this has happened to you, but whenever you're using an AI and it writes code for you, sometimes it's really outdated code, like it's using an API that's three versions old or something like that. This kind of helps alleviate that problem. So yes, I'm all about learning here; this isn't a college course.
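If you want to try Context7, its documented setup (assuming you have Node installed) is to register it as an MCP server in your client's config, for example in Claude Desktop's `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

Other MCP clients (Cursor, for instance) use the same server command with their own config file layout.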
[00:04:34]
I'm not trying to gate you behind a passing grade or something like that. What I want you to do is maximize your learning. And if maximizing your learning means looking at the completed code or asking Claude more about these things, do that. I don't really care how we get to the learning; just please get to the learning. At the same time, I think there's value in struggle: doing something, seeing it break, trying again, seeing it break again, trying again.
[00:05:05]
So just try and find that balance there. Don't cheat yourself.
>> Speaker 2: So what do you think about the fact that we're sometimes banging against prompts and not actually learning, versus back in the day, like five years ago, we would bang against code and actually learn? What's your take on that?
>> Brian Holt: There's kind of two answers to that. There's the learning process, where if you're literally just saying "fix it, fix it, fix it," you're obviously not learning anything.
[00:05:35]
And if you don't know what it's fixing and it's a topic that you're trying to learn about, you're doing yourself a pretty big disservice. If something's broken and the LLM is not figuring it out and you're just saying "fix it," you're robbing yourself of an opportunity to go dig in there and at least build some debugging skills, even if you don't end up learning whatever was broken. So I think that's doing yourself a disservice.
[00:06:00]
From a professional perspective, I actually ship code. And just sitting there saying "fix it" and watching it spin: at some point, you cross the threshold of "I've spent more time telling it to fix it and waiting than I would have spent Googling it, figuring it out, and fixing it myself." I usually give myself one, maybe two "fix its" before I'm like, "Okay, whatever it's going to arrive at is probably going to be a weird solution to my problem," which definitely happens.
[00:06:39]
Like, generally speaking, I am responsible for all code that I ship under my GitHub name. And so if I don't know what the LLM is doing, there's a huge problem there, right? So there's a huge temptation to just kind of foist this complexity onto the LLM, and I think we're taking it too far. So yeah, yeah, do your job, I guess that's what I'm trying to say. I mean, I find myself doing it too. I was like, "I don't want to solve this.
[00:07:06]
Like, I don't want to muster the mental capacity to try and figure out why this stupid bug is happening," especially when the LLM created it in the first place. But at some point you kind of get to like, "Okay, I own this. I need to go figure out why this is." And I'll say that normally when I start digging into those problems, I find that the LLM has done something really dumb, and I have to kind of go back and fix a bunch of other stuff.