JavaScript Performance

Reducing Parsing Times Solution

Steve Kinney

The "Reducing Parsing Times Solution" Lesson is part of the full, JavaScript Performance course featured in this preview video. Here's what you'd learn in this lesson:

Steve walks through the solution to the Reducing Parsing Times exercise, underscoring the importance of measuring the impact of a tool before adding it to the build process.

Transcript from the "Reducing Parsing Times Solution" Lesson

[00:00:00]
>> Steve Kinney: So yeah, basically optimize-js tries to figure out the things that would have been lazily parsed and then eagerly parsed, and skip straight to the eager parse, because doing it once is faster. So we can grab some different libraries, pull in jQuery.
>> Steve Kinney: And you can see that without optimize-js, it took 16 milliseconds to parse.

[00:00:20]
With optimize-js, we shaved a millisecond off, or almost two milliseconds off. You can grab Lodash, again two milliseconds. And if we're aiming to get everything on the screen, this is just one library, right? It could be really useful to go ahead and use it, and every one of these milliseconds that we shave off is important.

[00:00:41]
This is not even our application code yet.
>> Steve Kinney: ThreeJS is 5 milliseconds faster, right? So you're like, yeah, use it all the time, right? But these are, again, contrived examples, right? Did anyone try it in other browsers? What were some of your findings?
>> Student: Chrome is way faster with jQuery than Firefox.

[00:01:05]
>> Steve Kinney: Yeah, each one is implemented differently, and there are some different decisions that are made there. And so it's important not to be the person only testing in Chrome, but to look at other browsers as well, because this is where some of the performance stuff with the measuring becomes incredibly important.
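
As a rough, hedged illustration of one way to do that measuring (this is not the course's exact benchmark setup), you can time new Function() over a library's source to approximate parse-and-compile cost; the function name and URL below are illustrative:

    // Rough sketch for approximating parse/compile cost (assumed approach,
    // not the exact benchmark used in the course). new Function() forces an
    // eager parse and compile of the source without executing it, so the
    // timing is only a proxy for real <script> parsing, which may be lazy.
    async function timeParse(url) {
      const source = await (await fetch(url)).text();
      const start = performance.now();
      new Function(source); // parse + compile only; the body never runs
      const elapsed = performance.now() - start;
      console.log(`${url}: ~${elapsed.toFixed(1)} ms`);
      return elapsed;
    }

    // Illustrative usage (URL is an example, not from the course):
    // timeParse('https://code.jquery.com/jquery-3.6.4.min.js');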

[00:01:22]
So if we take a bunch of libraries and we run them through, we can see that in Chrome 55, generally speaking, there is a noticeable improvement across most of the libraries. You're like, ship it, yeah, let's put it in our Webpack build pipeline. Everything's great, this is gonna be wonderful, use it everywhere.

[00:01:42]
>> Steve Kinney: But the story across libraries is not as straightforward on Edge 14, right? We can see that, yeah, with ImmutableJS there's a 30% improvement, but if you're using PouchDB, well, it's 32% slower, right? And if we just follow these rules and put these parentheses everywhere, you could be making applications slower.

[00:02:06]
Or you could be making applications faster, so that's why it's really important to measure. We get a slightly different story if we look at Safari, where it's almost universally a bad idea, [LAUGH] right? In most cases it is making stuff slower. That comes down to the implementation details of all of these browsers, right?

[00:02:26]
And so again, the kind of moral of the story here is to make sure that you're measuring, right? Look to see what the actual impacts and implications of these things are, rather than just following some rule that I told you, or something along those lines. Treat me, and every word that's coming out of my mouth, with a healthy dose of skepticism, right?

[00:02:44]
[LAUGH] Take those words, measure them against your application, then make a decision.
>> Student2: Can I ask a quick question?
>> Steve Kinney: Yeah.
>> Student2: When we're doing function declarations and wrapping them in parens, if you're using a function expression instead, does that make a difference here?
>> Steve Kinney: I think, and I reserve the right to be wrong about this, but I believe that those are also still lazily parsed.

[00:03:05]
But IIFEs, for instance, those immediately invoked function expressions where we wrap them in the parentheses and immediately call them, those are eagerly parsed right out of the box, right? So there are different use cases, and the other kind of disclaimer that I'm gonna make now is that a lot of these rules change, right?
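
To make that lazy-versus-eager distinction concrete, here is a minimal sketch. The exact heuristics vary by engine and version, so treat it as illustrative rather than definitive:

    // Illustrative sketch of lazy vs. eager parsing heuristics (engine-dependent).

    // A function declaration or plain function expression is typically
    // pre-parsed lazily: the engine skims it and does the full parse only
    // when the function is first called.
    function declared() { return 1; }           // usually lazy
    var expressed = function () { return 2; };  // usually lazy as well

    // Wrapping the function expression in parentheses is the hint that
    // optimize-js inserts: many engines treat it as "probably an IIFE"
    // and parse it eagerly.
    var hinted = (function () { return 3; });   // often parsed eagerly

    // An actual IIFE is invoked right away, so it is eagerly parsed
    // out of the box.
    var result = (function () { return 4; })();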

[00:03:20]
The Chrome team is working on making this stuff better, right? Which is why: measure early, measure often, right? When we get to the actual compiling engine and the optimizations that it makes, that's a moving target, right? There are things that are slow now that won't be slow later; take when ES6 originally came out.

[00:03:38]
All of the ES6 stuff that was for the browser was slower than the ES5 stuff. So it would be very easy to read a blog post then and believe that ES6 was inherently slower. It was slower because browser vendors had had ten years to optimize and figure out all the edge cases for the ES5 stuff, and the ES6 stuff was brand new, right?

[00:03:57]
Three years later, that is not true anymore, right? Because the browser vendors are attempting to get better and faster with these things. So generally speaking, all these things are a moving target, and that's why I keep saying measure, almost a beat in between every slide. So, if you did want to use this, if your research goes well,

[00:04:17]
and you can see that you have performance gains across the board, there are also Webpack plugins that can just insert this into your build pipeline as well (a hedged config sketch follows below). Just some other quick, tasty notes on parsing.
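
As an aside on the Webpack route just mentioned, here is a hedged sketch of a config. It assumes the community optimize-js-plugin package and its documented sourceMap option (neither is specified in the course), so verify the names against the plugin's README. The parsing notes continue below.

    // webpack.config.js (hedged sketch; the plugin name and option below are
    // assumptions based on the community optimize-js-plugin package, not
    // something specified in the course)
    const OptimizeJsPlugin = require('optimize-js-plugin');

    module.exports = {
      entry: './src/index.js',
      output: { filename: 'bundle.js' },
      plugins: [
        new OptimizeJsPlugin({
          sourceMap: false // assumed option; verify against the plugin docs
        })
      ]
    };
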
>> Steve Kinney: Whenever we have a function that is inside of another function, that is effectively always going to be lazily parsed, right?

[00:04:39]
So if we move that out, it can get parsed on its own, and in this case it will get better performance, right? You can argue, does that mean we take a 15-minute break and you go through your giant code base and refactor everything? Probably not. There are tools that can help you with this, which we'll cover at the end, but again, just something to kinda think about.
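
A minimal sketch of that nested-function point, assuming typical engine behavior (the details vary by engine and change over time):

    // A function defined inside another function is typically pre-parsed
    // lazily and then parsed again when the outer function actually runs.
    function outer() {
      function inner(n) {   // effectively always lazily parsed here
        return n * 2;
      }
      return inner(21);
    }

    // Hoisting the helper to the top level lets it be handled on its own,
    // which is the refactor being described; worth measuring before doing
    // it across a whole code base.
    function double(n) {
      return n * 2;
    }
    function outerHoisted() {
      return double(21);
    }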
