Check out a free preview of the full Web Performance Fundamentals course:
The "Improving FCP Practice" Lesson is part of the full, Web Performance Fundamentals course featured in this preview video. Here's what you'd learn in this lesson:

Todd explains how to improve the first contentful paint (FCP) metric and walks through how to simulate this improvement in the course website. Content Delivery Networks (CDNs) are an effective tool to improve FCP because they cache content on a server closer to the user's region which reduces the distance the data needs to travel.

Transcript from the "Improving FCP Practice" Lesson

[00:00:00]
>> We are starting to look at how to improve the individual Core Web Vitals metrics. And so let's start with the first contentful paint. As a quick reminder, the first contentful paint is the time from the page start until the user sees an indication that the page is going to load for them.

[00:00:21] Another way of saying that is we need to respond quickly. So how do we do that? How do we make sure that we're responding quickly? Well, let's conceptually think about our website again. With all websites, inherently, you have a set of servers or hosts or virtual machines or whatever it is.

[00:00:42] And you have a set of documents that you need to deliver across the network to your users. There are a few different parts of this that can all contribute to what is slowing down first contentful paint. Your servers need to be quick. You need to deliver small documents that can be sent efficiently, and the number of hops through that magical cloud network needs to be as small as possible.

[00:01:10] And so you need to focus on these three things to improve your first contentful paint. So there's different tactics for each of these three. So your servers, how do we make sure your server is quick? Well, we need to make sure that your server is sized correctly for what you're doing.

[00:01:30] The first thing that we need to do in order to improve it is focus on your servers. Now the specific changes to your servers are going to be very dependent on what you're doing and what technology stack you're using. But essentially, you need to focus on three things.

[00:01:47] First, sizing the hardware or the virtual machine or your network or whatever your constraining factors are correctly. You need to make sure that you have enough overhead in the machines that you've selected to deliver the content that you need. Second, you should minimize the processing you need to do in order to fulfill the request.

[00:02:06] As in, if you just need to serve an index.html page to your users, you probably shouldn't be making calls to your database in order to fulfill that request, unless there's something especially interesting that you need to do with it. And third, the bandwidth you have to your servers needs to be big enough in order to fulfill all the requests that you have coming in at once.
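To make that second point concrete, here is a minimal sketch of the idea in Express, the kind of server used later in this lesson. This is not the course repo's actual code, and the folder name and port are made up; the point is just that a request for index.html is fulfilled straight from disk, with no database call in the path.

```js
// Minimal sketch (not the course repo's server): serve static documents
// directly from disk so no database or per-request app logic runs.
const express = require("express");
const path = require("path");

const app = express();

// index.html and other assets come straight from the filesystem,
// so fulfilling the request costs almost no server processing.
app.use(express.static(path.join(__dirname, "public")));

app.listen(3000, () => console.log("Listening on http://localhost:3000"));
```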

[00:02:29] How do we make sure we're keeping our documents as small as possible? Well, the two major things we can speak to here are the size of our content and how we compress it. So content size, how do you deliver as small a payload as you can and still get the effectiveness that you need?

[00:02:47] So this is gonna be again, dependent on your application, but if you're delivering an HTML page or a JavaScript file or an image, there's certain upper limits on how much you should send. For an HTML document, probably if you're sending anything more than 80 or 100k in terms of total size, that is just way too much, you're putting too much on a page at the same time.

[00:03:18] An image might be a meg at its upper limit, right, for how big an image should be on the Internet. And if you're sending out bigger things than that, then you're just sending too much content to be consumed over this medium. Compression: even if you're sending a 100k HTML document, how you compress that document over the wire can greatly improve that speed.

[00:03:43] Most commonly on just about any platform, you can turn on Gzip compression. And for newer web platforms, you can turn on more advanced compression such as Brotli. Now this is going to be very specific to your technology stack and so I'm not gonna be able to go specifically into what compression you should use.
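Still, as a rough sketch of what "turning it on" can look like, here is an example assuming an Express server and the compression middleware from npm; your platform's knobs will differ.

```js
// Hedged example: Gzip responses with the "compression" middleware on an
// Express server. Brotli support varies by platform; many CDNs can apply
// more advanced compression at the edge for you.
const express = require("express");
const compression = require("compression");

const app = express();

// Compress response bodies before they go over the wire.
app.use(compression());

app.use(express.static("public"));
app.listen(3000);
```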

[00:04:03] But compressing your documents is gonna greatly reduce the number of bytes that you push over the wire, so you're sending fewer things and doing fewer things. The third part that is under our control is how we transmit our data. So, if we take a look at that cloud and zoom in on it a little bit, there's actually more than one thing happening here.

[00:04:27] There's where our servers are in your data center or in Amazon or in Microsoft or in Digital Ocean or wherever you host your content. And there's a series of network hops in their own infrastructure. And you largely don't have control of that. On the other side, you have your users, they're connected to their own ISP or their own wireless network or whatever.

[00:04:52] And that has a series of hops to manage that network. Between them is what connects those two networks, and these are all the infrastructure of the Internet. And this transit time is something that we can control. We can control it based on where we place our servers and how far away they are from our actual users.

[00:05:16] Now this time is actually more interesting and more relevant than you might think it is. For example, if your servers are on the East Coast of the United States and your current user is on the West Coast of the United States, the minimum time is going to be around 200 milliseconds.

[00:05:33] There's just no way to get it any faster than that if you have to communicate across that distance. This is simply due to the speed of light and how fast network hardware can switch packets. So there's a ton of performance benefit to be gained by reducing this distance. So how do you reduce this distance?

[00:05:55] Well, the most effective, easiest way to do it is to use CDNs. A content delivery network is a service that takes the content, those files that your servers are spitting out, and rather than sending them through the entire series of tubes that is the Internet every time, does something smarter.

[00:06:13] What if we just had a copy of them right at the edge of every one of our users' networks, so that they can just grab it right from there without having to go across the Internet? That's what most CDNs are doing. CDNs like Cloudflare or Akamai or MaxCDN, there are dozens and dozens out there if you look up CDN services.

[00:06:37] And most of these work the same way: when a user makes a request, the CDN will pick it up. They'll call your server if they don't already have a copy, and they will cache a copy of your response and serve it out to every single user who might ask for it.
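The piece of this you control from your origin server is telling the CDN what it's allowed to keep a copy of, usually via Cache-Control headers. Here's a hedged Express sketch; the header value and folder name are illustrative, not taken from the course repo.

```js
// Mark static responses as cacheable so an edge server (Cloudflare, Akamai,
// etc.) can hold a copy near the user instead of coming back to the origin.
const express = require("express");
const app = express();

app.use(
  express.static("public", {
    setHeaders: (res) => {
      // "public" allows shared caches (like a CDN) to store the response;
      // max-age is how long, in seconds, that copy can be reused.
      res.setHeader("Cache-Control", "public, max-age=3600");
    },
  })
);

app.listen(3000);
```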

[00:06:52] And serving from that cached copy is dramatically faster than actually doing the real work. Essentially, we're doing fewer things by not processing each request across the entire network. So this is largely about putting infrastructure in place, which is very, very hard to demo and largely dependent on your setup. But our example repository has a way for us to put server upgrades in place.

[00:07:20] So let's do that together and simulate what impact a server and infrastructure upgrade would have on the Request Metrics site. Okay, I'm gonna flip back over here to our code. And in order to do this, we're gonna need to pop open package.json. This is something that I made configurable in this library.

[00:07:45] There's a couple of options here under config, and these are some specific keys that my simulated server is going to honor. Now, because setting up a CDN and setting up server compression is gonna vary so much depending on what platform you're using, we're not gonna cover the details of that.

[00:08:06] We're just going to turn it on for this Express server that we're operating right now. So I'm gonna change the server delay from the 200 milliseconds it's representing now, as though my server was on the East Coast and my user was on the West Coast, to something more realistic of what I would see with a CDN in place, around 50 milliseconds.

[00:08:29] This essentially simulates putting our infrastructure much closer to our users. And I'm gonna change server compress to true, which is basically just telling my server to attach very simple compression. We are gonna Gzip our content before we send it over the wire. Save the JSON file, and then you're going to want to stop the process and run npm start again to pick up the changes.
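For reference, the config section ends up looking roughly like this. The key names shown here are hypothetical; check the repo's package.json for the exact keys the simulated server honors.

```json
{
  "config": {
    "server_delay": 50,
    "server_compression": true
  }
}
```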

[00:09:00] So we've completed some server upgrades, or simulated server upgrades. If we refresh our Request Metrics page, we should see a much faster response from our server. If we go into our DevTools and look at, say, the network panel for these requests, we can see that all of our requests are coming in at around 50 to 60 milliseconds, representing that speed increase.

[00:09:29] And if we click on one of these, we can see that the content we are getting back from our server is now Gzipped. So what does this mean? What is the actual impact of this? Well, to see the improvement, let's go back to Lighthouse. This was our initial snapshot from before, right?

[00:09:51] We hadn't done anything to our servers, and these were our Lighthouse scores. Let's go ahead and run it again now that we've made these upgrades. We're just gonna click the little plus sign in the upper left and generate a new report with the exact same settings.

[00:10:08] Huge improvement, right? I went from in the seventies to in the nineties, and my First Contentful Paint improved considerably. It's now at 0.6 seconds, and Largest Contentful Paint is only a little bit behind it. So simply making these changes had a huge impact on the performance of the site.

[00:10:31] Now, unfortunately, it doesn't help everything. It's still the same site with the same choices and the same order of operations, and so there are other metrics that are still slow. It still takes a while until the page is actually done. It's still shifting around a lot. It's still doing some things we don't want, but we made a big impact.

[00:10:51] You can see this impact reflected in your field data as well. If we refresh the page a few times, the perf.log.csv is gonna continue to update. If you paste that across into our dashboard, you should see a significant decrease in both the first contentful paint and the largest contentful paint.