
The "Chrome User Experience Solution" Lesson is part of the full, Web Performance Fundamentals course featured in this preview video. Here's what you'd learn in this lesson:

Todd demonstrates the solution to the Chrome User Experience exercise.


Transcript from the "Chrome User Experience Solution" Lesson

[00:00:00]
>> There were some questions in chat, actually, that prompted this: I dropped some links to this resource here on my screen, web.dev. Some of you weren't aware of it, so I wanted to point it out, because it's a fantastic resource. It's a website built by Google that codifies a lot of Google's perceived best practices on the web.

[00:00:22]
A lot of the authoritative information about the Core Web Vitals, how they're measured, and how they might change over time is here. So if you haven't been out to web.dev to take a look, and you want to dive deeper into any of these things after this course is done, it would be a great resource for you.

[00:00:41]
Let's get back to our exercise. So we pulled in some real data. Now, the CrUX data isn't perfect, right? It represents the data that Chrome has collected, but it's aggregated across a whole bunch of users, and it's really only showing you one data point per month.
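
For reference, here is a minimal sketch of pulling that same monthly data programmatically from the public CrUX API. It assumes you've created an API key in the Google Cloud console; the origin and form factor below are just illustrative:

```ts
// Sketch: query the CrUX API for an origin's P75 Core Web Vitals.
const API_KEY = "YOUR_KEY"; // placeholder: your Google Cloud API key

async function queryCrux(origin: string) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ origin, formFactor: "PHONE" }),
    }
  );
  const { record } = await res.json();
  // Each metric carries a histogram plus a p75 value.
  return {
    fcp: record.metrics.first_contentful_paint.percentiles.p75,
    lcp: record.metrics.largest_contentful_paint.percentiles.p75,
    cls: record.metrics.cumulative_layout_shift.percentiles.p75,
  };
}

queryCrux("https://www.npr.org").then(console.log);
```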

[00:01:00]
So it's saying, for all of the users over the last month, here is the value. And it's the P75, not the average: 75% of the users had this experience or better. This data won't necessarily match up perfectly with what we measured locally, but it should be pretty close.
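
As a minimal sketch of what P75 means, here is the nearest-rank calculation over a hypothetical set of LCP samples:

```ts
// Compute the 75th percentile (P75) of a set of user measurements,
// the statistic CrUX reports per metric. Lower is better for timing
// metrics, so P75 means 75% of page loads were at least this fast.
function p75(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: the value at the 75% position of the sorted list.
  const index = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[index];
}

// Example: five LCP samples in milliseconds.
console.log(p75([1200, 1900, 2400, 3100, 8000])); // 3100
```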

[00:01:23]
I captured this data a week or so ago, so the rollups might be a little different from the data you captured. But here are the Web Vitals scores that I pulled in, and how they compare to what I perceived them to be. It's not exactly what I would have expected.

[00:01:47]
So for example, NPR, which I thought was subjectively the fastest site, only had the third-fastest FCP according to CrUX, 30% worse than what I measured in Chrome. It was third in LCP, 42% worse. It was first in CLS, but still almost 100% worse than what I measured locally.
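
A tiny illustrative helper for the kind of comparison being described here, with made-up numbers rather than the actual course data:

```ts
// How much worse (positive) or better (negative) a field P75 is,
// relative to a local lab measurement, as a percentage.
function percentWorse(field: number, lab: number): number {
  return ((field - lab) / lab) * 100;
}

// Hypothetical: a lab FCP of 1000 ms against a field P75 of 1300 ms
// is 30% worse in the field.
console.log(percentWorse(1300, 1000)); // 30
```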

[00:02:10]
And so in almost every case, the field data was worse than my lab data. And in some cases, it flipped the order. For example, The New York Times had a much better set of field data than the lab data suggested. Now, what could that mean?

[00:02:36]
Why might that happen? Well, the field data is coming from real users. Some of them are on fast machines, some on slow machines, some on good networks, some on crappy mobile networks in elevators. How well the site tailors itself to each of those users is reflected in the field data.

[00:03:01]
So for example, this would suggest that The New York Times is better able to scale down its content and deliver a fast experience even on slow devices, and it's better at that than NPR is. NPR might deliver a really fast experience for me on my fast connection, but maybe on a slow device it's not so good.

[00:03:24]
So field data sometimes shows a much different story than lab data. That's why it's really important to capture both.
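
One common way to capture your own field data is the open-source web-vitals library. This is a minimal sketch, assuming the v3+ function names and a hypothetical /analytics endpoint on your own server:

```ts
// Capture Core Web Vitals from real users in their browsers and
// beacon each measurement to your own analytics endpoint.
import { onCLS, onFCP, onLCP, type Metric } from "web-vitals";

function report(metric: Metric) {
  // sendBeacon survives page unloads better than fetch does.
  navigator.sendBeacon(
    "/analytics", // hypothetical collection endpoint
    JSON.stringify({
      name: metric.name,   // "CLS" | "FCP" | "LCP"
      value: metric.value, // ms for FCP/LCP, unitless for CLS
      id: metric.id,       // unique per page load
    })
  );
}

onCLS(report);
onFCP(report);
onLCP(report);
```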
