Exploring Service Workers

Proactive Background Caching

Kyle Simpson

You Don't Know JS

Kyle explains how to slowly load posts in the background by caching all posts by ID during the service worker's initialization.


Transcript from the "Proactive Background Caching" Lesson

>> Kyle Simpson: We just mentioned that those blog posts aren't getting kind of proactively cached. They're only getting cached when we have visited them, which might entirely be the strategy you wanna use. You might not want to stuff things into somebody's cache if they haven't actually utilized it. But you might want to load old historical blog posts kind of slowly in the background while they browse around the site.

That way, if they happen to go offline and happen to be clicking around, they're just pleasantly delighted to find out that there's a lot of content that's already there. So we're gonna do a very basic form of that. I've updated my version, and I'm gonna keep track of whether or not I'm actually doing this.

But it's not gonna be all at once, it's gonna kinda be with long delays in the background. So I'm going to, when I start up my service worker the first time, start trying to cache all of the posts. And here's my strategy, I'm gonna make a request to the server to ask for all the current post IDs, that API request.

And I'm gonna use all those post IDs to generate the URLs that I wanna make a request and cache for. So here's my cacheAllPosts function. If I'm already caching, because that's been spun off by something else, then just return. Go ahead and delay five seconds, this is just a promise-ified setTimeout, by the way.
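The "promise-ified setTimeout" mentioned here can be sketched in a couple of lines. This is a minimal reconstruction for illustration, not the exact helper from the course code:

```javascript
// A setTimeout wrapped in a promise, so it can be awaited.
function delay(ms) {
  return new Promise(function (resolve) {
    setTimeout(resolve, ms);
  });
}

// Inside an async function, you can then pause like this:
//   await delay(5000);  // wait ~5 seconds before continuing
```

Awaiting the returned promise is what lets the caching loop pause between steps without blocking anything else.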

Just go ahead and wait five seconds so that we're allowing other stuff, the initial page load and initialization to happen. We don't wanna take over their bandwidth right away. But then, just kind of in the background, what we're gonna do is make a request for get-posts. If we don't already have it, we're gonna stick it in the cache.

>> Kyle Simpson: If we are not online, we're gonna try to get it out of the cache and use the cached version for what we wanna pull out. So once I have a list of post IDs, if there's greater than zero, if there are posts that I wanna deal with, I'm gonna call this cachePost function with whatever the first post is.
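The "get-posts request with an offline fallback" described here might look something like the sketch below. The cache name, the /api/get-posts endpoint, and the isOnline flag are assumptions for illustration, not necessarily the names used in the course code:

```javascript
const CACHE_NAME = "site-cache-v1";   // hypothetical cache name
const POSTS_API = "/api/get-posts";   // hypothetical endpoint

// Fetch the list of post IDs; if online, refresh the cached copy,
// otherwise fall back to whatever copy is already in the cache.
async function getPostIDs(isOnline) {
  const cache = await caches.open(CACHE_NAME);

  if (isOnline) {
    try {
      const res = await fetch(POSTS_API);
      if (res && res.ok) {
        // store a copy so the ID list is also available offline
        await cache.put(POSTS_API, res.clone());
        return res.json();
      }
    } catch (err) {
      // network failure: fall through to the cached copy
    }
  }

  const cachedRes = await cache.match(POSTS_API);
  return cachedRes ? cachedRes.json() : [];
}
```

Note the res.clone() before cache.put: a response body can only be consumed once, so the cache gets a clone while the caller reads the original.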

So I'm gonna work from current all the way backwards. We want the most recent ones first, and then we'll get the older and older ones as we go. So cachePost just constructs the URL from that post ID. If we're not telling the cache function that it should forcibly get all these, then we'll just check to see if we already have it.

So if we already have it, no reason to re-request it. But when we've done a service worker update, just like we get all of the other stuff, we should get all of the new blog posts, in case any of them have been updated with fixes to typos.
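The per-post check described here, skip posts that are already cached unless a service worker update forces a refresh, could be sketched like this. The "/post/&lt;id&gt;" URL pattern and the forceReload flag name are hypothetical, not taken from the course repo:

```javascript
// Build the URL for a post from its ID (illustrative URL pattern).
function postURL(postID) {
  return "/post/" + postID;
}

// Decide whether a post needs to be (re-)fetched and cached.
async function shouldCachePost(cache, postID, forceReload) {
  if (forceReload) {
    // after a service worker update, re-fetch every post in case
    // any of them were edited (typo fixes, etc.)
    return true;
  }
  // otherwise, only fetch posts we don't already have in the cache
  const cached = await cache.match(postURL(postID));
  return !cached;
}
```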

Okay, so if we're not checking the cache or it's not in the cache, then if we know that we need to cache it, let's just wait ten seconds between each request. Cuz again, we wanna be very friendly to the network, we don't wanna overload the network with a bunch of requests unnecessarily.

So we'll just wait ten seconds, make the request for that blog post. If we find it, put it into the cache. And then if that failed, then we'll retry it again. So we'll try it, and if that failed, try again in ten seconds, and if that failed, try again in ten seconds.

And it'll just sort of sit there on the page, attempting to get these old blog posts. If that has all succeeded, then we check to see if we have any more posts to cache, and then we do it again. So we're just gonna, every ten seconds, try to keep loading posts; if we're successful, we move on to the next one.

And if we had thousands of posts, eventually, over time, somebody would have gotten all of those posts. But they won't necessarily get all of them at once, they'll get a few dozen each time they visit, for example.
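The overall loop Kyle describes, working through the post IDs newest-to-oldest with a ten-second pause before each request and retrying failures, might be sketched as follows. The function takes its fetch-and-cache step and its delay helper as parameters purely so the control flow is visible; the actual course code wires these pieces up differently:

```javascript
// Work through the post IDs one at a time. Before each attempt, wait;
// on success, move to the next ID; on failure, leave the ID in place
// so the next iteration retries it after another wait.
async function cacheAllPosts(postIDs, fetchAndCachePost, delay) {
  const remaining = postIDs.slice();  // newest first, oldest last
  while (remaining.length > 0) {
    await delay(10000);               // be friendly to the network
    const ok = await fetchAndCachePost(remaining[0]);
    if (ok) {
      remaining.shift();              // success: on to the next post
    }
    // failure: remaining[0] is unchanged, so it's retried next loop
  }
}
```

Because a failed post is simply retried on the next pass, the loop never skips content; it just keeps plodding along every ten seconds until everything is cached.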
>> Kyle Simpson: Okay, so let's demo that working. So the first thing I'm gonna do, we need to go back online, and I'm gonna go ahead and clear out my cache.

I'm gonna kill my service worker, clear out the network, clear out my console.
>> Kyle Simpson: And we'll start up on the about page. And I'm gonna come over here, and I want us to watch, there's our get-posts, we were waiting five seconds for it. Now we're waiting ten seconds.

And in about four more seconds, we should see the first post got cached. And now we're waiting ten more seconds, and then,
>> Kyle Simpson: The last post got cached. We check our cache. Sure enough, even though we haven't yet visited these blog posts, the blog posts are in our cache.

And if we go offline, we'll be able to go visit them.
>> Kyle Simpson: This still feels like a website, and that's kind of the point that I wanna make here, above all. We're not trying to turn this into an application. We're just trying to turn this into the kind of user experience that somebody would expect if they weren't thinking about all the vagaries of bad networks or servers being down or things like that.

They would just kind of expect it to work, and we're now giving them that extra effort, we're making it work.
