
Lesson Description
The "Async Middleware & Pitfalls" Lesson is part of the full, API Design in Node.js, v5 course featured in this preview video. Here's what you'd learn in this lesson:
Scott discusses middleware factories for async error handling in Express, stressing correct middleware order, avoiding modifying responses after headers are sent, and handling background tasks carefully to prevent race conditions.
Transcript from the "Async Middleware & Pitfalls" Lesson
[00:00:00]
>> Speaker 1: Let's talk about middleware factories. Oh yeah, actually, here was a really cool example I found looking through my code, something I actually do a lot, which is an async handler. I personally find myself doing try-catch a lot inside of all my handlers because, again, typically inside of my handlers I'm talking to a database or I'm hitting another service, anything async that's going to fail at some point, and that's going to break my server.
So I try-catch it. I get sick of having to do the try-catch all the time, so instead I make a wrapper that will wrap a middleware and resolve it in a promise, so I don't need to do a try-catch, right? So this is like function currying in this case: a function that takes in a function.
The function that it takes in is the middleware, right? So it would take the middleware as a function, and it will then resolve that middleware, catch any error, and immediately forward it to the next function. That's what this is doing, so that way I don't have to try-catch the error myself.
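The code itself isn't shown in the transcript, but the wrapper described here looks roughly like this. A minimal sketch, assuming Express and a wrapper name of asyncHandler (the name and exact shape are assumptions):

```ts
import type { Request, Response, NextFunction, RequestHandler } from "express";

// Hypothetical factory: takes a middleware/handler and returns a new one that
// resolves its return value as a promise and forwards any rejection to next().
const asyncHandler =
  (handler: RequestHandler): RequestHandler =>
  (req: Request, res: Response, next: NextFunction) => {
    Promise.resolve(handler(req, res, next)).catch(next);
  };

export default asyncHandler;
```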
Let's say I have a middleware called "fetch user" that, whenever somebody tries to go here, gets the user first: it actually retrieves it from the database, and I'm guessing this would just send it back immediately.
Instead of doing a try-catch and all that stuff inside of here and then passing it to next, I can just call the handler and pass my middleware, and I'm done. All the errors are handled; it's good to go. I don't have to write that error handling logic more than one time.
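Put together with the wrapper sketched above, the route might look something like this. fetchUser, the /user/:id path, db.findUserById, and app are all made up for illustration:

```ts
// Hypothetical "fetch user" handler: no try/catch needed here, because
// asyncHandler catches a rejected promise and forwards it to next().
const fetchUser = asyncHandler(async (req, res) => {
  const user = await db.findUserById(req.params.id); // may reject
  res.json({ data: user });
});

app.get("/user/:id", fetchUser);
```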
This is literally the exact code that I had in a rebuild I was working on. Pitfalls, like I said: forgetting to call next. Please call next; it will hang if you don't. Wrong middleware order: for instance, trying to read the JSON body before it has been parsed, so if you're trying to get request.body.name before you did the JSON parsing, it's probably not going to work out.
You won't be able to see that. The whole point is that the parsing is what allows you to look at the request body; if you do not have it before this route, it literally does not exist. Order does matter.
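As a quick illustration of the ordering point (the route and field names here are made up):

```ts
import express from "express";

const app = express();

// The JSON body parser must be registered before any route that reads
// req.body; if it runs after (or not at all), req.body is undefined.
app.use(express.json());

app.post("/user", (req, res) => {
  // Safe here: express.json() already parsed the body above.
  res.json({ name: req.body.name });
});

app.listen(3000);
```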
Not handling async errors: this is less of a problem. The only reason I put it here is because it was a problem back in the day when we didn't have async/await or promises. Everything was a callback, so it was really easy to forget to handle async errors. Now, with async/await, you're already doing that on the front end, so it's not that big of a deal.
Modifying your response after headers were sent: technically, you can't send headers more than once. If you've already responded, you can't respond again. You also should not call next; if you're going to respond, it means you're done. Put a return in front of your response so your code does not keep going.
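For example, something like this (a made-up route), where forgetting the return lets execution fall through to the second response and triggers the "cannot set headers after they are sent" error:

```ts
app.get("/secret", (req, res) => {
  if (!req.headers.authorization) {
    // return so the function stops here; without it, execution continues
    // and the res.json() below tries to respond a second time.
    return res.status(401).json({ message: "unauthorized" });
  }

  res.json({ secret: "ok" });
});
```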
There are other frameworks and runtimes where you can actually respond and still continue to run; I'm specifically talking about webhooks here. Vercel has a new feature called waitUntil that allows you to do an async thing and respond back to the request immediately while completing background work.
This is great for webhooks because you typically want to respond back as fast as you can to a webhook request, a 200 response, to let the service know you received it. If the work you need to do takes a long time, this approach solves that problem.
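A rough sketch of that pattern, assuming Vercel's waitUntil from the @vercel/functions package and a Web-standard request handler; processWebhook is a made-up stand-in for the slow work:

```ts
import { waitUntil } from "@vercel/functions";

// Hypothetical slow job: verify the event, write to the database, call other services.
async function processWebhook(payload: unknown): Promise<void> {
  // ...long-running work goes here
}

export async function POST(request: Request): Promise<Response> {
  const payload = await request.json();

  // Keeps the function alive until processWebhook settles,
  // without delaying the 200 response below.
  waitUntil(processWebhook(payload));

  return Response.json({ received: true }, { status: 200 });
}
```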
It's like an inline queue where you do the work in the background without having to wait. Can Node do this? Technically, yes; nothing stops you from running more code after responding. However, I don't think it's a good practice outside of simple things like logging or analytics.
How do you track when the background work is done? You create potential race conditions and lose error handling capabilities. I would only do this for very simple tasks that don't require a full queuing system, and where it doesn't matter much if something occasionally breaks.
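In plain Express, that "respond first, keep working" version looks something like this; recordAnalytics and app are made up for illustration, and the caveats above (no completion tracking, easy-to-lose errors) are exactly why it's limited to simple work:

```ts
app.post("/webhook", (req, res) => {
  // Acknowledge immediately so the calling service isn't kept waiting.
  res.status(200).json({ received: true });

  // Fire-and-forget: nothing tracks when this finishes, so at minimum
  // catch errors yourself. Anything important belongs in a real queue.
  recordAnalytics(req.body).catch((err) => {
    console.error("background task failed", err);
  });
});
```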