
The "Caching" Lesson is part of the full, Advanced GraphQL, v2 course featured in this preview video. Here's what you'd learn in this lesson:

Scott recommends always doing client-side caching, explains that application caching is currently the most common way to cache GraphQL but that the trend is moving toward HTTP caching, and recommends using HTTP caching where possible.


Transcript from the "Caching" Lesson

[00:00:00]
>> Now, I wanna talk about caching. It is a very important thing. There's no point in building all this and becoming an expert in deploying if we don't understand the elephant in the room, which is caching. With GraphQL, it's a big problem, or at least it's made out to be one.

[00:00:14]
It's gotta be solved. So there are many types of caching. We have application caching, and that's basically stuff we're doing inside the code, like things in the database, things in the resolvers. Anything that involves us writing code at runtime, at execution time, that's application caching. So database, external data sources, resolvers.

[00:00:35]
Then you have network caching, and this is probably what a lot of us are used to: something on the CDN level, where we're using some cache headers. We're doing something like Cloudflare, or whatever we're doing. And caching our REST GET requests on the CDN is so easy, cuz the URL just works, and it's good to go. That's network caching.

[00:00:54]
So HTTP caching, stuff like that. And then we have client-side caching. So if you've taken the frontend GraphQL course with React that we have, you understand client-side caching. But if you've ever used anything like Redux, or Vuex, or any client-side state management, that's client-side caching.

[00:01:12]
So that's the other type of caching you will do. So let's talk about application caching. This is the preferred way to cache GraphQL right now. It's changing, but it's the preferred way, because it's the easiest to understand, there are more available tools and scenarios, and you can go pretty far with it.

[00:01:36]
Although this is changing, people are moving over to network caching, and we'll talk about that. But right now, this is the preferred way to cache. And you have many options and levels to cache at, depending on your server. So again, you could have a traditional Redis cache in front of, I don't know, your Mongo database, or actually just use something like Mongo, and you got your indexes, right.

[00:01:55]
You don't even need Redis, cuz it's already indexed and sitting in a good cache. So really, you have ultimate flexibility, and it's more traditional, closer to what you're used to. You can also use things like DataLoader, which is like a per-request cache for GraphQL. So when people send you these circular queries, you don't have to go back to the database to ask for something that you already got.

[00:02:20]
As the resolvers go, you can cache that per request. So things like DataLoader help with that. So yeah, there are a lot more options when you're caching application-side. I think, though, that doesn't really help you if you wanna be the closest to your customers all across the world and on different nodes.
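
To give a feel for that, here's a minimal sketch of a per-request cache with the dataloader package. It's not the course's exact code: db.users.findByIds is a hypothetical batch lookup helper, and typeDefs, resolvers, and db are assumed to exist elsewhere.

    // A minimal sketch of per-request caching with the dataloader package.
    // db.users.findByIds is a hypothetical batch lookup helper.
    import DataLoader from 'dataloader';

    const createLoaders = (db) => ({
      users: new DataLoader(async (ids) => {
        // One batched database call instead of N separate lookups
        const rows = await db.users.findByIds(ids);
        // DataLoader expects results in the same order as the requested ids
        const byId = new Map(rows.map((row) => [row.id, row]));
        return ids.map((id) => byId.get(id) ?? null);
      }),
    });

    // Build fresh loaders on every request so the cache never outlives it.
    // Resolvers then call context.loaders.users.load(id); repeated ids within
    // the same query hit the per-request cache instead of the database.
    const context = ({ req }) => ({ loaders: createLoaders(db) }); // db assumed in scope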

[00:02:35]
You need something on the network layer, but at least when they do go to origin, the data access is pretty fast. And then there's also, like I said, the reason a lot of people do this is cuz there's just a bunch of misunderstandings around HTTP caching and GraphQL.

[00:02:50]
And also because the technology is just now getting there to take advantage of HTTP, network-based caching with GraphQL. So let's talk about that. HTTP caching can be a bit more complicated, unless you use services and features like Apollo cache control, Apollo Engine, and automatic persisted queries.

[00:03:10]
So what I'm talking about there, I'll just show you. Apollo cache control allows you to use directives on types, so you can cache on a per-type basis with different TTLs, and stuff like that. These directives basically help HTTP network calls, on the header level, understand what to do. It gives you metadata on how to handle caching.
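
As a rough illustration, here's a hedged sketch of those directives on a schema, assuming Apollo Server's built-in cache control support; the types and maxAge values are made up.

    // A minimal sketch of Apollo cache control directives, with made-up types
    // and TTLs. Apollo Server can turn these hints into Cache-Control metadata.
    import { gql } from 'apollo-server';

    const typeDefs = gql`
      type Post @cacheControl(maxAge: 240) {
        id: ID!
        title: String
        votes: Int @cacheControl(maxAge: 30)  # shorter TTL for a fast-changing field
      }

      type Query {
        latestPost: Post
      }
    `;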

[00:03:36]
So it makes caching a lot simpler, very easy to work with. This is a system that you have to opt in to, and you need an interpreter, so something [INAUDIBLE] needs to actually interpret this. And right now, the only thing that I know of that automatically interprets this for you is a paid product by Apollo.

[00:03:53]
So you gotta buy the product to interpret this. But there's nothing stopping you from writing your own interpreter, especially using some of the technologies that I'll talk about. So that's pretty good. I've never used it, never had to use it, but I've heard it works pretty well. You also have what I mentioned here, which is persisted queries.

[00:04:12]
A persisted query is basically this: instead of writing your queries on the client and then, whenever those queries need to run at execution time, at runtime, sending them up to get evaluated and executed so the result comes back, you do the work at build time. You have all your queries on the client.

[00:04:28]
And before you even deploy, you send all those queries up to the server with a tool, or whatever, however you do it. And the server pre-validates all the queries right there. And it saves them in some type of database, or something like that. And it gives them each an ID.

[00:04:40]
So this is query 1, this is query 2, this is query 3, this is query 4. And then, on the client, all you have to do is just say, run the GraphQL query for query 1. And it will just execute that query, with way less going across the wire.

[00:04:54]
And because it's just a GET request with a parameter in it, it's cacheable. So you can cache it. You don't have to do all this crazy stuff. It's gonna cache the same way every other REST API call has ever cached before. And you get that for free, automatically, with Apollo.

[00:05:08]
So with Apollo Client on the frontend and Apollo Server on the backend, that automatically works for you, you don't have to do anything. It automatically does it. So when Apollo makes a query for the first time from the client, it's gonna make another request to get the ID, and try to persist it, and store it for you in memory.

[00:05:24]
So it'll try to persist those for you. You don't have to worry about it. It doesn't do a pre-validation step, but it will persist the queries, cuz it's trying to get you to opt in to their caching implementation, which is really cool. So yeah, there are a lot of features like that that you can get now.
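
For reference, here's a rough sketch of turning automatic persisted queries on from the client, assuming an @apollo/client setup; the endpoint URL is a placeholder.

    // A minimal sketch of automatic persisted queries with Apollo Client.
    // The endpoint URL is a placeholder; sha256 comes from the crypto-hash package.
    import { ApolloClient, InMemoryCache, HttpLink } from '@apollo/client';
    import { createPersistedQueryLink } from '@apollo/client/link/persisted-queries';
    import { sha256 } from 'crypto-hash';

    const link = createPersistedQueryLink({
      sha256,
      useGETForHashedQueries: true, // send hashed queries as GET so CDNs can cache them
    }).concat(new HttpLink({ uri: 'https://example.com/graphql' }));

    const client = new ApolloClient({ link, cache: new InMemoryCache() });
    // The first request sends only the hash; if the server doesn't know it yet,
    // the client retries with the full query, and after that only the hash goes
    // across the wire.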

[00:05:39]
You can also use edge applications to program your own cache logic. So when I said you need an interpreter to interpret that metadata that's being generated, well, you couldn't really do that before, until now. So now you can use something like Cloudflare Workers, which are basically JavaScript apps on the edge.

[00:05:59]
And you can just write JavaScript literally on a CDN. They're not even in beta anymore, so a really cool product to use. A different alternative is Lambda@Edge, there you go, which is basically the same thing. It's writing Lambda functions on the edge of CloudFront, which is AWS's CDN product, pretty cool.

[00:06:24]
This is one of the first ones I've used. I mean, it's CloudFront, you can't get more powerful than that. And then there's other ones like Fly.io, which is a really cool product that kind of does the same thing, JavaScript at the edge. This one's open source, though.

[00:06:37]
So you can kinda hack in and do all types of crazy stuff. But all of these allow you to write applications on the edge. In fact, you can deploy a GraphQL server on the edge, and have your GraphQL server be a gateway to all your other services. So there's a lot of different things you can do there.

[00:06:50]
But at minimum, you could definitely cache whatever you want. You can take an incoming POST request that's a GraphQL mutation, convert it to a GET request, and cache it globally if you want. There's a lot of different strategies that you can take advantage of other than just interpreting cache control from Apollo's cache directives.
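
As a rough idea of what that edge logic can look like, here's a hedged sketch of one simple strategy in a Cloudflare Worker: cache GraphQL GET requests at the edge and let everything else pass through to origin. The TTL is made up.

    // A minimal sketch of edge cache logic in a Cloudflare Worker
    // (service worker syntax). GET queries are cached at the edge for 60s;
    // everything else (POSTs, mutations) falls through to origin.
    addEventListener('fetch', (event) => {
      event.respondWith(handle(event));
    });

    async function handle(event) {
      const request = event.request;
      if (request.method !== 'GET') return fetch(request); // only cache reads

      const cache = caches.default;
      let response = await cache.match(request);
      if (!response) {
        response = await fetch(request);
        // Copy the response so we can set our own TTL header before caching it
        response = new Response(response.body, response);
        response.headers.set('Cache-Control', 'public, max-age=60');
        event.waitUntil(cache.put(request, response.clone()));
      }
      return response;
    }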

[00:07:08]
So pretty cool. All this stuff is relatively new, like what, in the last two and a half, three years? So it's coming of age, and it's really awesome. So I'd highly recommend checking out this stuff. You can really get to the point where you don't even have an origin server anymore.

[00:07:22]
You just deploy apps to the edge, and that's where your app lives. You never even have an origin, which, when you think about it, sounds ridiculous. But it's possible, especially if all you have is like a single-page app, it's really possible. The other thing is, you can restrict or handle mutations over GET, which is basically what I said.

[00:07:38]
So inside the edge applications, you could change POSTs to GETs, and things like that. You can make sure everything is over GET, which is possible, you can send the mutation out with a GET request. It will just be bound by the browser's restrictions on how many characters you can have in a URL, and things like that.

[00:07:57]
But yeah, you can totally do it. And queries work with GET already, even though they might default to POST, so they automatically work. There's nothing stopping you from doing all that, you just have different restrictions. Client-side caching, Apollo handles this pretty well, or your own implementation like Redux.

[00:08:15]
Vuex, RxJS, NgRx, whatever your flavor is, you can just cache it yourself, pretty simple. And then, again, persisted queries in coordination with the server, the client kinda takes over and does a lot of that stuff, too. But this isn't really specific to GraphQL, this is just client-side in general: you should cache things from the server on your client, that way you don't have to always go to the server if you don't need to.
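
If you're on Apollo, here's a minimal sketch of that client-side cache in action, assuming a placeholder endpoint and made-up field names.

    // A minimal sketch of client-side caching with Apollo Client's InMemoryCache.
    // The endpoint is a placeholder; the second identical query is answered from
    // the normalized cache instead of hitting the server.
    import { ApolloClient, InMemoryCache, gql } from '@apollo/client';

    const client = new ApolloClient({
      uri: 'https://example.com/graphql',
      cache: new InMemoryCache(),
    });

    const POSTS = gql`
      query Posts {
        posts { id title }
      }
    `;

    async function demo() {
      await client.query({ query: POSTS }); // goes to the network
      await client.query({ query: POSTS }); // default cache-first policy: served from cache
    }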

[00:08:39]
So how should you cache? If you're able to use HTTP caching, enable it. When I say you're able to, that basically means using some of the technologies that I just walked over. Maybe you got an edge CDN that you can program, maybe you're using Apollo's paid service, I don't know, whatever implementation you want. If you can do it, I highly recommend just throwing it on HTTP, and then just never having to deal with application caching, because there are just so many options, and so many different ways.

[00:09:05]
And so many different things that you have to deal with. I would just throw it on the edge. Whatever implementation you use, if it has an API that you can call from your API to bust the cache and add new things to it, you're in a perfect world. Like, you will literally almost never hit origin.
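
Just as an illustration of that cache-busting idea, here's a hedged sketch of purging the edge cache from a mutation resolver. It assumes Cloudflare's zone purge endpoint; the zone ID, token, purged URL, and db.posts.update helper are all placeholders.

    // A hedged sketch of busting the edge cache from the API after a mutation.
    // ZONE_ID, CF_API_TOKEN, the purged URL, and db.posts.update are placeholders.
    async function purgeEdgeCache(urls) {
      await fetch('https://api.cloudflare.com/client/v4/zones/ZONE_ID/purge_cache', {
        method: 'POST',
        headers: {
          Authorization: 'Bearer CF_API_TOKEN',
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ files: urls }),
      });
    }

    const resolvers = {
      Mutation: {
        updatePost: async (_, { id, input }, { db }) => {
          const post = await db.posts.update(id, input);
          // Kick the now-stale cached query for this post out of the edge cache
          await purgeEdgeCache([`https://example.com/graphql?op=post&id=${id}`]);
          return post;
        },
      },
    };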

[00:09:21]
So that sounds like a fantasy, but it's possible. And I would say, cache any external HTTP data sources. So if your resolvers are talking to other HTTP sources, like a REST server, or maybe you have an SDK that's talking to Stripe, and you don't need it to be updated all the time.

[00:09:41]
I don't know if you'd wanna cache Stripe stuff, but anything that's not on your server and your resolvers are talking to it, cache those things, because you don't want your server being dependent on whether an external server is slow or not. So you probably wanna cache it. And what if that thing crashes?

[00:09:57]
What are you gonna do? You have this whole query that's got all these fields on it, one of them goes to Stripe, Stripe's down, so your whole query fails. That kind of sucks. Maybe you do want that, but I don't know. So you probably wanna handle a lot of that stuff.
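
A rough sketch of that, assuming a runtime with a global fetch, a made-up REST endpoint, and a simple in-memory TTL cache in the resolver layer:

    // A minimal sketch of caching an external HTTP data source in a resolver,
    // using a simple in-memory TTL cache. The REST endpoint is made up.
    const cache = new Map();
    const TTL_MS = 60 * 1000;

    async function cachedFetchJson(url) {
      const hit = cache.get(url);
      if (hit && Date.now() - hit.at < TTL_MS) return hit.data; // serve from cache
      const res = await fetch(url);
      const data = await res.json();
      cache.set(url, { data, at: Date.now() });
      return data;
    }

    const resolvers = {
      Query: {
        // If the external service is slow, recently cached data still resolves
        plans: () => cachedFetchJson('https://api.example.com/plans'),
      },
    };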

[00:10:11]
And caching is just one of the ways you should handle it. And of course, do client-side caching. Client-side caching is gonna solve a lot of problems. And if you're only working on the server, then that's not even your job, so let them do it. But client-side caching will definitely solve a lot of problems here.
