
The "Thinking About Performance" Lesson is part of the full, JavaScript Performance course featured in this preview video. Here's what you'd learn in this lesson:

Steve talks about three different ways of thinking about performance: network load performance, JavaScript performance, and rendering performance.


Transcript from the "Thinking About Performance" Lesson

[00:00:00]
>> Steve Kinney: Cool, so let's get started. Let's talk a little bit about what we mean by the word performance, and why it matters. That seemed good on the slide, but I'm actually gonna answer the why-it-matters part first and then we'll talk about how to think about it. Mostly it's a sales pitch to keep you interested over the next few hours; it's kind of like giving you the pitch on why it's important, and then we'll talk about what to do about it.

[00:00:25]
So why does performance matter? I think the industry standard at this point is to go through a bunch of slides of statistics as a large appeal to authority, like, this stuff is really important. Also, you're already watching this course, so it's fair to say that I am preaching to the choir.

[00:00:46]
But it is not unlikely that you have a product manager or an engineering manager that would also, [LAUGH] that would benefit from this sales pitch. So we'll do it together, and arm ourselves with some of the research behind this. Jakob Nielsen did a bunch of research on user interfaces and performance.

[00:01:06]
A lot of it was really kind of interesting. At about a tenth of a second, users feel that the system is instantaneous, right? They take an action and the application has responded. They don't notice any kind of lag. That is the ideal. So a tenth of a second, 100 milliseconds, is kind of what we aim for.

[00:01:25]
Now, is that always possible? Do we need to make a network request to do the thing? Right, there are a lot of considerations to think about. But we know that below that, users basically can't tell the difference. I think Microsoft has done research where they make things even faster and faster, and there are differences that seem imperceptible that users can still notice, but for our purposes, 100 milliseconds is the gold standard.

[00:01:50]
After about a second, a user's flow of thought is interrupted, and they notice it, right? They can get distracted, they can lose the feeling that the application is responsive and doing the things that they're asking it to do. And then after about ten seconds it's almost impossible to keep a user's attention.

[00:02:10]
Right? At that point, a shiny object in the corner has appeared and the user is gone, right? Now granted, if you have interactions on your page that take more than ten seconds, we should talk, right? Ten seconds is a very long period of time. But it's also not an outrageous amount of time for a page to load, right?

[00:02:28]
In a lot of cases, if we are writing JavaScript applications, we not only have to get those bits to the browser, but we also then have to build the application on the user's machine. And so that ten seconds applies to an interaction, but going and visiting the page in the first place is also an interaction with the page, right?

[00:02:46]
So we want to keep that in mind as well, because we know from a lot of research that if our page takes too long to load, it will have an impact on user engagement. Cool. So I'm going to show you some more statistics that make me seem smart, but really I'm just collecting other people's really smart work.

[00:03:03]
The Aberdeen Group found that a 1-second slowdown resulted in 11% fewer page views and 7% fewer conversions, right? Page views and conversions for a lot of applications turn into dollars and cents, right? And I write software because I really enjoy writing software, but I also really enjoy getting paid for writing software.

[00:03:22]
So I don't have to do other things. And I also enjoy the company I work for making enough money to pay me. So a lot of times, caring about performance directly translates into the work that we do, and it's also about delighting users, right?

[00:03:37]
Akamai found that a 2-second delay in web page load time increased bounce rates by 103%. So double the number of people are leaving if your page is two seconds slower to load. A 400-millisecond improvement in performance resulted in a 9% increase in traffic at Yahoo, right?

[00:03:55]
So we can see that there is this correlation between the performance of our application and effectively paying the bills. Google found that a 2% slower page resulted in 2% fewer searches, which given Google's business model means 2% fewer ads shown, and you can see how that kind of trickles down through the entire thing.

[00:04:18]
A 100-millisecond improvement in performance resulted in a 1% increase in overall revenue at Amazon. Now, 1% normally doesn't seem like a big impact, but when you're operating at Amazon's scale, 1% is a lot, right? So we can definitely trace caring about performance to business outcomes a lot of the time.

[00:04:40]
53% of users will leave a mobile site if it takes more than 3 seconds to load, right? So we put all this work and care into building this website, and they're not even going to see it because it took too long. And I think the one general finding that makes a lot of sense in the research is that if you want your application or your site to feel faster, you need to be about 20% faster than your competitors, right?

[00:05:07]
So you need to keep that in mind; we know that that makes a difference. So being fast is important, right? Yeah, let's do the thing, right? We have these tried-and-true things like minifying your assets and gzipping things before you send them across the wire, and those have been true since I started programming.

[00:05:30]
Right? But things have changed out from under us as well. So at the same time, while being fast is really important, it's also a lot harder to be a fast application. Applications keep getting bigger and bigger and bigger. A few years ago, the average web application hit the point where it was bigger than the video game Doom, right?

[00:05:52]
So effectively, we are sending an entire video game from a few decades ago every time someone goes to visit our page. Not only is that a fun factoid, but I think looking at the graph is kind of important, to see that it's not like it just jumped up a few years ago and that's where we are now in web application land.

[00:06:11]
It keeps increasing as time goes on. So this is beta.httparchive.org, which actually has a bunch of data on the size of the average web page from about 2010 onward. And you can see that I hovered over March 15, 2013, so five years ago today, and the average, the 50th percentile, was about 864 kilobytes, right?

[00:06:45]
And there's a range there. But you can see that the average today on desktop is almost double that, more than double that, and on mobile, ten times that size, right? So yeah, phones are getting faster and stuff like that, but not at the same rate that we are shipping more and more code to the browser.

[00:07:02]
So it means that we need to start caring about performance if we hope to keep up. At the same time, LTE is getting slower, on two networks in particular. But in general, as more and more people get LTE, the networks are getting increasingly bogged down.

[00:07:20]
Which means that as we ship more and more code, the time it takes to get that code is getting longer and longer, which can get a little out of control. So again, we need to be dealing with this stuff, because it's not going to just magically get better.

[00:07:36]
The browser is gonna get faster and the next iPhone is gonna be way better, so I don't have to worry about any of this. While those things are true, they're not necessarily keeping up; the rate of growth is kind of the bigger issue here. So all right, performance is important, have I made that point?

[00:07:52]
Feel good about that? Convinced? Sweet. So the big question is, how do we think about performance, right? I joked before that you can gzip your assets and minify them and stuff along those lines, right? And that's great. But I think it's really interesting, and really important, to think about the application that you're working on and what its needs are.

[00:08:13]
A really great example is if you think about a content website like the New York Times or the Huffington Post or anything along those lines, what do they need to do? What is their biggest priority? Showing you content, right? And they need to do that as instantaneously as possible.

[00:08:26]
One would argue that's true for Twitter or Facebook as well. You logged in to see your timeline, and they need to show it to you as quickly as they possibly can. Showing you a loading bar is not acceptable, right? You will leave. You'll be like, I should be doing something else rather than checking on the news.

[00:08:44]
I'm gonna go back to that real work, I don't need to do this, right? And that's kind of what we're talking about with those noticeable percentages, right? That's when people go, I don't actually need to be doing this right now, there's my boss coming, Command+Tab back to my text editor, right?

[00:08:59]
But Gmail, if you think about it, we tolerate a loading bar every time we log into Gmail, right? Because there is a difference. On a content site you might be jumping from page to page to page. On a site like Gmail, on the other hand, you open up Gmail and you live in it for the majority of the day.

[00:09:18]
Maybe you're switching to another tab, but we're willing to pay that cost once, because we know that once we pay the cost for loading the application, it's there and it's alive, so it's more of that single-page app feel. But Gmail needs to be much more concerned about things like memory leaks, right?

[00:09:35]
Because if it gets slower over time as the application keeps running, as they're adding more and more to the page and so on and so forth, that becomes untenable. Whereas on a page like The New York Times, someone's bouncing from page to page to page, right? The priorities are very different for those applications, right?

[00:09:50]
So we can tolerate this from Gmail, but we would not tolerate it from the New York Times. We have to manage those trade-offs. So I'm gonna argue today that there are three different kinds of performance, and we're gonna talk about each one individually, rather than just saying, for performance, do these things, which could have been a blog post of 20 items, right?

[00:10:08]
We're going to figure out how we think about performance, how some of these things work under the hood, and then use that to inform what we do about it. So the first of the three types of performance that I want to argue exist is network load performance. So, okay, we build client-side applications.

[00:10:27]
Which is a unique problem insofar as: you would like this application? Let me send you the source code to the application and you can deal with everything. So we need to get that to the user before they see anything. That's obviously important and we need to deal with that.

[00:10:42]
And normally when people talk about performance, they're talking about caching headers and all those kinds of things, and how do I get this page to you as fast as possible? And that's important. But I also think that, especially in the world of modern single-page applications, we need to think about, okay, you have the application, now what?
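
As a rough, hedged sketch of that first bucket (this isn't from the lesson): the caching and compression side of network load performance usually comes down to a handful of headers, shown here with Express purely as an illustration and with hypothetical file paths.

const express = require('express');
const compression = require('compression');

const app = express();

// gzip responses before they go over the wire.
app.use(compression());

// Fingerprinted assets can be cached aggressively...
app.use('/assets', express.static('dist/assets', {
  setHeaders: (res) => res.set('Cache-Control', 'public, max-age=31536000, immutable'),
}));

// ...while the HTML shell gets revalidated on every visit.
app.get('/', (req, res) => {
  res.set('Cache-Control', 'no-cache');
  res.sendFile(`${__dirname}/dist/index.html`);
});

app.listen(3000);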

[00:10:59]
Right? So we're also gonna talk about JavaScript parsing and compilation performance, right? Because again, especially in the case of a single-page app, we are sending you this entire application: you go ahead and parse the entire source code, compile it, figure out what it all means, and then build the application.

[00:11:16]
If we send code that is very hard to parse and compile, well, we wrote it on a $3,000 MacBook. This is not a $3,000 MacBook, but it's my work one; I didn't pay for it, but I assume that it's expensive. We have these expensive machines with these high-powered processors.

[00:11:35]
But then we send it to a mobile device that is a lot slower at parsing and compiling, right? And we can't do that work on our side; we have to send it to the user and hope that they can also parse and compile it quickly. So learning how that stuff works under the hood allows us to, one, analyze and make some decisions and see where the hurt is, and two, refactor our code in such a way that it is easier on the parser and the compiler and all those things.
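
One rough, hedged sketch of that idea (not something shown in the lesson): splitting a non-critical feature behind a dynamic import() means the browser doesn't have to download, parse, or compile it until the user actually asks for it. The module and function names here are hypothetical.

// Hypothetical example: a heavy charting feature that most visitors never open.
const button = document.querySelector('#show-chart');

button.addEventListener('click', async () => {
  // The parse and compile cost of './charting.js' is only paid on first use.
  const { renderChart } = await import('./charting.js');
  renderChart(document.querySelector('#chart'));
});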

[00:12:00]
Finally, we have rendering performance. So we have it on the page. Our JavaScript is not simply a Node.js program; it runs in their browser, it's manipulating the page, it's displaying a user interface, right? Our JavaScript does not live in a vacuum, but instead has to deal with the DOM and the browser and all of these other components.

[00:12:21]
So we'll figure out how that entire system works and how to make sure we don't hit large roadblocks there. So, three different kinds of performance: are they all equally important? I would argue yes, but the real answer is no, right? Because again, it matters what your application needs to do, right?

[00:12:37]
And where the pain points are in your application. So we'll also talk a little bit about measuring and how to figure those things out. All right, very cool. So we're gonna talk a little bit about some numbers to just think about. We're not gonna obsess too much about numbers today, because I think it's really hard to say, okay, every interaction needs to be 100 milliseconds.

[00:12:54]
You fire up your app and then you've got a two-second interaction; I give up, I'm going home for the day. But having some goals in mind is, I think, really useful. So Google has this acronym called RAIL. I guess they could have had other ones like LIAR, or anything along those lines, but RAIL is the one they went for.

[00:13:14]
And it's basically a set of benchmarks to think about performance, effectively goals. So, RAIL stands for Response, Animation, Idle, and Load. And we'll look at those again in a second. But here's that response part again. This is how humans perceive interactions. We saw it before: 0 to 100 milliseconds, instantaneous.

[00:13:37]
After ten seconds, I'm going somewhere else, but even after a second there's a mental context switch, which again is that maybe-I-should-be-doing-something-else feeling. So we wanna aim for under 100 milliseconds.
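
A small, hedged sketch of what checking that budget might look like (not from the lesson); handleSave() and the #save button are hypothetical stand-ins for whatever work responds to the user.

function handleSave() {
  // hypothetical application work that runs in response to the click
}

document.querySelector('#save').addEventListener('click', () => {
  const start = performance.now();

  handleSave();

  const elapsed = performance.now() - start;
  if (elapsed > 100) {
    console.warn(`Response took ${elapsed.toFixed(1)} ms, over the ~100 ms RAIL budget`);
  }
});
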
>> Steve Kinney: Animations, most screens, I think the new iPad Pro is a little bit different, but most of our screens refresh at 60 frames a second, which turns into about 16 milliseconds in between every frame.

[00:14:02]
So that's how often the screen refreshes; if you go any faster than that, it doesn't matter, the screen's not gonna show anything new. But if you get slower than that, users start to notice that the scrolling is getting a little janky, or the dragging and dropping in the editor that I work on is getting a little weird.

[00:14:16]
So that 16 milliseconds is really important to think about: you wanna get a new frame to the screen within that window if you're trying to animate anything and have it look good. But all of those 16 milliseconds are not yours. The browser's doing other stuff, any Chrome extensions that are running are doing other stuff, so you have to share that time.
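
Here's a small, hedged sketch of that frame budget in practice (not from the lesson): requestAnimationFrame lets the browser schedule your animation work once per frame, so it can fit alongside everything else it has to do in those 16 milliseconds. The element and the 300-pixel distance are hypothetical.

const box = document.querySelector('#box');
let position = 0;

function step() {
  // Do a small amount of work per frame and let the browser paint.
  position += 2;
  box.style.transform = `translateX(${position}px)`;

  if (position < 300) {
    requestAnimationFrame(step); // ask to run again on the next frame
  }
}

requestAnimationFrame(step);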

[00:14:36]
Idle is another part of it, which is basically: anything that doesn't need to be done immediately, try to do it when the user's not doing anything. Okay, now they're reading the article; now go get whatever creepy tracking software you wanna put on there to figure out what they're doing, right?

[00:14:50]
Don't do that during the initial page load, or else there is gonna be nothing to track because they're gone. And if you can get that done in 15-millisecond chunks, great, because then if they do do something, you're ready to deal with that. Finally, Load. It would be great if we could get the entire page in under a second, but otherwise that's the ideal for first meaningful paint.

[00:15:10]
So what is the first meaningful paint? It really depends on your application. If you are Twitter, maybe it's that middle timeline, or mentions, or whatever, first, and then you can get the little sidebars and the search and all of those things. You're trying to get to the first very important thing in under a second.
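
A rough, hedged sketch of those last two letters (not from the lesson): requestIdleCallback is one way to push non-urgent work into idle time, and PerformanceObserver can report the browser's paint timings; first meaningful paint itself isn't a standard entry, but first-contentful-paint is. loadAnalytics() is a hypothetical deferred task.

function loadAnalytics() {
  // hypothetical non-urgent work, e.g. pulling in tracking code
}

// Idle: run deferred work only when the browser reports spare time.
requestIdleCallback((deadline) => {
  if (deadline.timeRemaining() > 0) {
    loadAnalytics();
  }
});

// Load: observe when the first content actually hit the screen.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.name, entry.startTime); // e.g. "first-contentful-paint", 842.3
  }
}).observe({ type: 'paint', buffered: true });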

[00:15:27]
Those are important to think about. We're not going to, like I said before, obsess over them, because I can tell you, I took the whole RAIL model, looked at the application I worked on, and almost cried, right? Because those numbers were nowhere near where that large, desktop-like app in the web browser was, right?

[00:15:47]
So I would argue it's about progressively getting a little better, right? Figuring out where the areas of hurt are in your application. Okay, I know this is slow; that's step one, right? Step two is, okay, I know how to read the performance timeline in Chrome, so I can figure it out and come up with some hypothesis of what to do about it.

[00:16:09]
And then step three is having some ideas of what to do, and step four is measuring to see if it got better or worse, and stuff along those lines. So when I was preparing this workshop, I wrote this blurb about what it was originally going to cover and sent it to some of my colleagues.

[00:16:22]
And one of my former co-workers, Romeeka, came up with an alternative title for this workshop: Strategies for Optimizing Web Performance When, Honestly, You Have Like 5 Meetings Today and Have to Choose the Correct Hills upon Which to Strategically Die. [LAUGH] Right? It's very easy to watch some Google I/O presentation where it's like, yes, we rewrote the entire application as a progressive web app and now it's super fast.

[00:16:52]
Progressive web apps are actually cool, and you should totally watch my course on them. But the big rewrite is usually not something you're gonna be able to do tomorrow, right? So the goal here is, given the current world, given the current state of the thing that you're working on, how do we make it faster?

[00:17:07]
How do we make it better? And how do we repeat that until, slowly but surely, we start approaching those benchmarks? So it's important to think about what matters to you, and then to figure out how to measure it, right? I talked about showing content versus not leaking memory; that's cool, but it's also hard to express as a concrete, measurable thing, right?

[00:17:30]
So figure out what is actually the important thing in your application, right? Again, the New York Times might care about the time to the first headline. Twitter might care about how long it takes before you're able to write a tweet. Maybe that should be longer for me, but that's another conversation, right?

[00:17:43]
Maybe I need a delay to think about what I'm tweeting before I type it into that box. But Twitter probably wants to get me tweeting as fast as possible. Whatever the application is; so, I work on a big email editor, right? The time for the editor to load up so you can start dragging and dropping in the modules that you use to create the email, right?

[00:18:00]
That might be the important part for us: how long does it take from logging in to get to the point where you're creating a new marketing campaign? That would be an example of something that would be important for the application that I work on.
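
A short, hedged sketch of how a team might capture a milestone like that (not from the lesson) is the User Timing API, performance.mark and performance.measure; the mark names and the "editor ready" moment are hypothetical.

// Mark the start, e.g. as soon as the application bundle begins executing.
performance.mark('app-start');

// ...later, once the editor has rendered and drag-and-drop is usable...
performance.mark('editor-ready');
performance.measure('time-to-editor', 'app-start', 'editor-ready');

const [measure] = performance.getEntriesByName('time-to-editor');
console.log(`Editor was usable after ${measure.duration.toFixed(0)} ms`);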
