Blazingly Fast JavaScript

Other Performance Considerations





The "Other Performance Considerations" Lesson is part of the full, Blazingly Fast JavaScript course featured in this preview video. Here's what you'd learn in this lesson:

ThePrimeagen discusses various optimizations and improvements that can be made to a codebase. They explore the impact of different changes, such as updating logging, using linked lists, changing data interchange formats, and optimizing logging. The instructor also emphasizes the importance of testing in a real environment and making informed guesses to improve performance.


Transcript from the "Other Performance Considerations" Lesson

>> Now you gotta start playing the game of, what am I trying to improve? Memory or update? I bet you the logging made a big difference. We could actually kinda check that to see if it did actually make a big difference. I'll go like this, git commit, pool update and logging update.

I'm gonna check out. There we go. We can check out, how about this one? git checkout 2, we've already kind of done that one. So I guess we'll do this one really quickly. So we'll do a tsc, we'll erase this, and I'm gonna go to 3000. So we're gonna actually try to push through 3000 games on our third optimization.

Now remember, our third optimization contains second and third. No updates to update yet. And we really weren't seeing a difference back then. So are we seeing a difference now? So I'll go here, and now that we've made several changes to update, the best possible improved algorithm, reducing logging.

There's still one left which is getting rid of an array shift, which I think we could do. I haven't tried to do it live, I haven't practiced it once. But I don't think it'd be too hard to do live, I could write a quick linked list.
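Replacing an array shift with a linked list, as described here, could look something like this minimal sketch. The class and method names are mine, not from the course code; the point is that `Array.prototype.shift` moves every remaining element down a slot, while a linked list dequeues by just advancing a head pointer.

```javascript
// Minimal singly linked list queue. Array.prototype.shift is O(n)
// because every remaining element shifts down one slot; this dequeues
// in O(1) by advancing the head pointer.
class ListQueue {
  constructor() {
    this.head = null; // next node to dequeue
    this.tail = null; // last node enqueued
    this.length = 0;
  }

  push(value) {
    const node = { value, next: null };
    if (this.tail) {
      this.tail.next = node;
    } else {
      this.head = node;
    }
    this.tail = node;
    this.length++;
  }

  shift() {
    if (!this.head) return undefined;
    const { value } = this.head;
    this.head = this.head.next;
    if (!this.head) this.tail = null;
    this.length--;
    return value;
  }
}
```

Whether this beats a plain array in practice depends on the queue length; for short arrays, V8's shift is often fast enough that the pointer chasing loses.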

I could do this. There's nothing better than saying I can do this and then you write a linked list, and you just totally suck at it. All right, so at 3,000, not doing very great so far, our ratio is 43%. We've kind of, I bet you were probably pretty underwater at this point.

All right, so maybe 42. So it's higher. Let it keep on going. I'll give it, we'll go to 6 million frames. By the way, this is legitimately something that I do frequently, which is just, it comes down to exploring, guessing, and looking at graphs. There's not gonna be a solid way unless you can rewrite the algorithm.

Rethinking the problem is the best way. So that project-specific one, which is don't check all the bullets, just check the front ones, that is gonna be a huge win because you're changing the problem. And that's where the big wins come, is changing the problem. I'll git checkout blazingly fast.

And we'll do this once more. This will be blazingly fast, that is not how you do blazingly fast. Totally the wrong acronym there. So hopefully this one will do something. I'll be kind of sad if this one doesn't do anything. I'll probably just weep a little bit on stage.

It could have done something, we might, maybe, we'll see. We have plenty of time to watch this thing get screwed up. Just import Assembly and write it as Assembly. I mean, that's an option. I'm not that good to write it as assembly. So maybe there's a there there, maybe we're actually starting to see something attach faster, right?

We're sitting at 49 versus the current 32. My guess is that number will go up next time I refresh it, which I'm really scared to do. We'll see 36. 36, see I just felt it, I felt in my bones that it was going up, just because all the games are running, and how V8's handling of memory collection is changing, right?

How they handle it in the beginning is probably a little bit different than once they start developing heuristics and how long they can run. And so there's a lot of give and take when it comes to these things. There we go. So did we actually improve anything? At this point, I don't think we improved anything.

And so even though it looks like it might, it doesn't look like we are actually improving anything. Again, this is where the problem of local host testing comes along, which is we're not testing it in a real environment. We're not testing with a real machine. We're not testing with two CPU cores.

We're not getting all the good stuff that you really want to get in production testing. These changes likely would result in something a little bit better in production, but as of right now, there we are. And so the last things we could try to do is add in linked list changes.

That's about it as far as run goes. We have onMessage, onClose. These things are using sets to look up stuff. Is there a way that we can reuse this data and create a pretty clever pool here? We could technically do something that's very akin to the garbage collection, which would be, we could just test it out to see if it actually works.

You wouldn't even know if it works. But you could create an array in which you're storing everything. And every time you get done with a state and you wanna release a web socket, you could drop its index and add that index to a free list, and then use that free list number to know where an empty space's at with an unused state.

And then on every single web socket that comes in on open, you give it an index into this larger array. So you're always using these spots really, really quick. So to get a new index requires an array.pop, but to look up the state just requires an array access.
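The free-list idea described above could be sketched like this. The names (`StatePool`, `acquire`, `release`) are made up for illustration: one big array holds all the states, a second array holds the indices of freed slots, so getting a slot is an array pop and looking up a state is a plain array access.

```javascript
// Free-list pool sketch: states live in one big array, and a second
// array holds the indices of freed slots ready for reuse. Acquiring a
// slot is an array pop; lookup by index is a plain array access.
class StatePool {
  constructor() {
    this.states = []; // slot index -> state (or null when free)
    this.free = [];   // indices of released slots
  }

  // Called on socket open: stores the state, returns its index.
  acquire(state) {
    const idx = this.free.length > 0 ? this.free.pop() : this.states.length;
    this.states[idx] = state;
    return idx;
  }

  // Called on socket close: drops the state, marks the slot reusable.
  release(idx) {
    this.states[idx] = null;
    this.free.push(idx);
  }

  get(idx) {
    return this.states[idx];
  }
}
```

Because slots are recycled instead of allocated per connection, the backing array stops growing once it reaches the high-water mark of concurrent sockets, which is exactly the garbage-collection-like behavior described above.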

Would that be better? Maybe, maybe not. It's very, very hard to tell. And so this is the fun part about performance, is that it always ends like this, which is what's left to do unless you can dramatically change the algorithm. Now at this point, we have no algorithm left to change, and we're just trying to scrape around to see what is and what isn't affecting stuff.

Another thing we could really do that would probably make somewhat of a difference, maybe it would make difference in aggregate, is instead of writing out to disk every one second, write out to disk every five seconds. Do we need one second resolution on writing out all of our data?

Probably not. We're just aggregating anyways. The amount of time that goes by doesn't change how much data we're writing out. So we could have longer intervals between writing if we really wanted to squeeze out every last item. And so this is just really the final part, which is just, you got to just guess at the very last part, right?
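Widening the write interval could be sketched like this; `StatWriter` and its methods are hypothetical names, and the write function stands in for whatever disk append the server actually does. Since the stats are aggregated in memory either way, a 5-second interval means one write where a 1-second interval means five.

```javascript
// Sketch of batching stat writes: aggregate in memory, flush on a
// timer. The interval controls how often we touch disk, not how much
// data we write, since the stats are aggregated anyway.
class StatWriter {
  constructor(writeFn, intervalMs = 5000) {
    this.buffer = [];
    this.writeFn = writeFn; // e.g. a wrapper around fs.appendFile
    this.timer = setInterval(() => this.flush(), intervalMs);
  }

  record(stat) {
    this.buffer.push(stat);
  }

  flush() {
    if (this.buffer.length === 0) return;
    this.writeFn(this.buffer); // one disk write for the whole batch
    this.buffer = [];
  }

  stop() {
    clearInterval(this.timer);
    this.flush(); // don't lose the tail on shutdown
  }
}
```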

And so, we already went through all of these things, I just didn't do any of this. What about onMessage, linked lists, stop using JSON? This would also be another really good one, which is right now we're sending JSON messages, which means every single onMessage call is decoding a block of JSON that says fire from this player.

Why not just send up a number? Why not just send up number one for fire? Like why send up something that I have to take an array buffer, turn it into a string, take that string, turn it into an object by crawling everything out of it. Yeah, JSON.parse is super fast.

It has all these great optimizations to it, but it still has to do the work. What's really faster than parsing JSON is not parsing JSON. So the fastest form of JSON is no JSON. And so this is another thing. Once you have a very set model, you can explore using things like FlatBuffers.
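The "just send up a number" idea could look like this. The opcode value and the two-byte layout are invented for illustration, not taken from the course code: the point is that decoding becomes a couple of array accesses instead of buffer-to-string conversion plus JSON.parse.

```javascript
// Instead of a JSON message like {"type":"fire","player":7}, pack an
// opcode and a player id into a two-byte binary message. The opcode
// value here is made up for illustration.
const OP_FIRE = 1;

function encodeFire(playerId) {
  const buf = new Uint8Array(2);
  buf[0] = OP_FIRE;
  buf[1] = playerId; // assumes player ids fit in one byte
  return buf;
}

// onMessage side: a couple of array accesses, versus
// arrayBuffer -> string -> JSON.parse -> walk the object.
function decode(buf) {
  if (buf[0] === OP_FIRE) {
    return { type: "fire", player: buf[1] };
  }
  throw new Error(`unknown opcode ${buf[0]}`);
}
```

With a WebSocket you'd set `binaryType = "arraybuffer"` on the client and send the `Uint8Array` directly; the tradeoff is that the layout is now implicit, which is exactly why this only makes sense once the model is very set.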

If you know your service is never gonna change, or unlikely to change, using alternative wire formats can yield really big wins. And I've definitely seen this a lot in production. We recently had a service that, it was completely pegged by JSON. It was doing a lot of stuff.

It was spawning things. It was running games. It was doing all sorts of stuff. And the thing that was taking up the most amount of time on the machine was just JSON. And so even by simply getting a good zero-allocation protobuf implementation, and protobufs are still eagerly parsed, you still have to parse them.

The moment you get them, you can't defer parsing, even that alone made such a big impact that it was no longer the bottleneck. And so JSON can be very, very expensive. So changing interchange formats is huge.
>> If you have a deeply nested object, how much of an impact would it have on memory when you're creating a constant or variable that just references that deeply nested versus just using the path to that value instead?

>> Well, I don't know what V8 does as far as CPU optimizations. If you have a hard-coded path to an object, does it have some sort of cool offset jump? Does it have some sort of jitted opportunity to make that access really fast, or is it looking it up at every single dot?

I wouldn't know, but I often would just guess that that's almost never your problem. Your problem often is that you're just doing something too many times that's in bigger scope. Or you're creating memory that you don't need to be creating. It's almost never something as simple as, instead of using this, make a reference to that.

If it was, I mean, that'd be fantastic. But if that was the case, everything else we would do would also be 10 times more expensive. Cuz if that's observable, then, I mean, an array push or an array pop has to be even more observable, cuz you have to do so many more things.
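For concreteness, the two forms the question is asking about look like this. The `state` shape is invented; and, as the answer says, this is almost never where the time goes, caching the reference is mostly about readability.

```javascript
// The two access patterns from the question. Objects are held by
// reference in JavaScript, so the cached variable is not a copy:
// mutations through `p1` are visible through `state` too.
const state = { game: { players: { p1: { x: 0, y: 0 } } } };

// Walking the path on every access:
state.game.players.p1.x += 1;
state.game.players.p1.y += 1;

// Caching a reference to the deeply nested object:
const p1 = state.game.players.p1;
p1.x += 1;
p1.y += 1;
```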

But this has always been one of the biggest wins I've seen, which is when parsing a format over the wire starts making a real difference. Obviously, if you're going from the server to a client, JSON is probably your only choice. But if you don't have to use that and you're in a place where you can change that, there are probably some better options out there that you can try.

And when I say don't use JSON, I also don't mean use XML. So please don't take it that way. That's not what I'm suggesting using, okay? Conditional logging, we talked about that. Logging can be very heavy. I've seen logging become the bottleneck multiple times. So be careful about logging. Logging is great.

Logging is fantastic. Logging can become quite annoying.
