Blazingly Fast JavaScript

Analyzing Web Socket Results

ThePrimeagen

The "Analyzing Web Socket Results" Lesson is part of the full, Blazingly Fast JavaScript course featured in this preview video. Here's what you'd learn in this lesson:

ThePrimeagen makes some changes to the code to update the program to use WebSockets instead of promises. They define a WebSocket object and create functions for handling WebSocket events such as onClose and onMessage. They also make some optimizations to improve the performance of the program. Finally, they run some tests to measure the impact of the changes on memory and performance.


Transcript from the "Analyzing Web Socket Results" Lesson

[00:00:00]
>> So we have our little state map because, again, before, we got the WebSockets passed in and we listened to those. In this new paradigm, we no longer do that, so we kind of have to make some basic changes to our program. Let's click this, export, I can type onClose.

[00:00:18]
We're gonna have a WebSocket. I'm gonna call this thing a WS, cuz right now we don't really have it defined. And I'm gonna define WS as something with a send that takes a string and returns void, right, I believe I have to do that. There we go.

[00:00:33]
We're gonna keep our WebSocket definition super simple at this point. Also, if you do these kinds of things, sometimes it can make testing really easy, if you wish to do that level of testing. I'd be a little skeptical if you're doing that level of testing, but it can make things pretty dang easy.
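
As a minimal sketch, using names of my own choosing rather than the course's exact code, that definition might look like this:

```ts
// Hypothetical sketch, not the course's exact code: the game logic only
// needs to send strings, so the socket is typed structurally rather than
// as a concrete WebSocket class. This also makes it trivial to stub in tests.
export interface WS {
    send(msg: string): void;
}
```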

[00:00:46]
All right, awesome. So onClose, thank you, Copilot. I'm gonna get the state. If this thing happens to exist, we'll say, hey, this thing's been closed. Awesome, very, very easy. Let's see, what's the other thing we need to do? We need to do onMessage. I believe I actually already have an onMessage-like function down here.

[00:01:04]
So I'm gonna just bring that onMessage function up here. We're gonna export it. We're gonna delete the inside, and we're gonna have a WebSocket that's a WS and a string message. Looks good, yeah, that looks good. And so we'll grab out the state. If there is no state, so for whatever reason we got a message from something we're not keeping track of, which should never happen, but if this did happen, I'll return.
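
Putting the two handlers together, a hedged reconstruction might look like the following, where GameState and ws2state are assumed names building on the WS sketch above:

```ts
// Hedged reconstruction of the two handlers; GameState and ws2state are
// assumed names, and WS is the structural interface sketched above.
interface GameState {
    close(): void;
    processMessage(msg: string): void; // e.g. JSON.parse and apply to the game
}

// Map from each socket to its game state, looked up on every event.
const ws2state = new Map<WS, GameState>();

export function onClose(ws: WS): void {
    const state = ws2state.get(ws);
    if (state) {
        state.close(); // this game has been closed
    }
}

export function onMessage(ws: WS, message: string): void {
    const state = ws2state.get(ws);
    if (!state) {
        // a message from a socket we aren't tracking; should never happen
        return;
    }
    state.processMessage(message);
}
```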

[00:01:29]
I mean, I could process.abort, we could dump, we could do something different, but we'll just assume that this is good enough. There we go. We have an onMessage function that works based on a WebSocket, we have onClose based on a WebSocket. You're probably going, hey, didn't we just fix a performance problem by switching from sets to arrays?

[00:01:47]
Why are we using sets now? I would say this is probably a fine time to use them, if we've made a good bargain, meaning that we get a great performance win on WebSockets for a small performance hit of having to do this lookup on every single message slash on-close. Fine trade-off for me.

[00:02:02]
I'd say that this is a perfectly fine trade-off. All right, now that we've done that, we're gonna have to go down here and do a round of fixes. First off, get out of here, we don't need that. Next, do we need any of this? No, we don't, we don't need that.

[00:02:12]
We just simply need to go ws2state, set WebSocket 1 to state 1. And then Copilot will probably do the other one for me. Set WebSocket 2 to state 2. Change these to something that just has the send method, right? We're not being very prescriptive here on the WebSocket anymore.

[00:02:30]
Awesome. So this looks pretty good to me. We have all the mappings, and then we have to do one more thing, which is at the end of our game, we can't forget to delete these things out, right? So I'm gonna delete out. There we go, we'll delete out player one, player two, make sure our map isn't just growing too big, awesome.
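
Continuing the same sketch, the wiring and the cleanup at the end of a game might look like:

```ts
// Continuing the sketch: register both players' sockets when a game starts,
// and delete them when it ends so the map doesn't grow forever.
function startGame(ws1: WS, state1: GameState, ws2: WS, state2: GameState): void {
    ws2state.set(ws1, state1);
    ws2state.set(ws2, state2);
}

function endGame(ws1: WS, ws2: WS): void {
    // forgetting this cleanup would be a slow memory leak
    ws2state.delete(ws1);
    ws2state.delete(ws2);
}
```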

[00:02:51]
And then finally, take our game runner, turn that to that, and I'm pretty sure that is everything. Let's see, we have no more diagnostics. I think we're gonna do this first try. So there we go. So we've made that pretty simple update. I've updated our actual server to be using something like this.

[00:03:08]
And we are now updating our actual game to be using this different kind of look. So all I have to do here is obviously import these two from game. Fantastic, feeling pretty excited about this. If you can't keep up, again, the point is that I want to spend as little time on the code as possible, cuz I think this is the most boring part of the entire presentation, the code.

[00:03:32]
The exciting part is, how did we come to these conclusions? And again, if you forgot, the conclusion really was memory, and some of what I saw in the performance, and the gut intuition that JavaScript is just never gonna be as fast as C++. We can process this out of band of JavaScript.

[00:03:48]
So that's very exciting. All right, so now that we've done that, I'll do a little, make sure that we don't, okay, I did accidentally have that other server still running. Somewhere around there, so we'll go npm run test. We'll make sure I haven't broken the program. If I broke the program, that's genuinely too bad, because it's just like...

[00:04:09]
Look at that, it's working, let's go. All right, so this should all work out just fine, we shouldn't have any problems. I'll wait it out just to really make sure, that way you all believe me. So the WS package, for those that don't know, the WS package is fully JavaScript.

[00:04:24]
It uses the full internals of Node and everything, whereas uWebSockets is gonna be fully C++, only having the JavaScript for handing out frames, which to me is as good as it gets. I'm happy about that. All right, so we know that our integration tests work. We know that our little tests work.
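
For reference, a minimal uWebSockets.js server forwarding into the handlers sketched above might look roughly like this; the route, port, and module path are my placeholders, and the handler shapes follow the library's documented API:

```ts
// Minimal uWebSockets.js server sketch; './game' is an assumed module path
// for the onMessage/onClose/WS definitions sketched earlier.
import * as uWS from 'uWebSockets.js';
import { onClose, onMessage, WS } from './game';

uWS.App()
    .ws('/*', {
        // uWebSockets.js hands each frame over as an ArrayBuffer that is
        // only valid inside this callback, so copy it out to a string.
        message: (ws, message, _isBinary) => {
            // uWS's socket is structurally send()-compatible with our minimal WS
            onMessage(ws as unknown as WS, Buffer.from(message).toString());
        },
        close: (ws, _code, _message) => {
            onClose(ws as unknown as WS);
        },
    })
    .listen(9001, (listenSocket) => {
        if (listenSocket) console.log('listening on 9001');
    });
```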

[00:04:42]
We are now using WebSockets. We no longer have a bunch of promises. So let's see if this uWebSockets version runs better. So I'm gonna do the exact same thing that I have been doing, but it might be a little bit more fun to start off first with this. Let's just check the memory charts and see, did we make a difference there?

[00:04:59]
I just wanna kind of look a little bit, see if we've done anything there. So I'm gonna go tsc, we're gonna inspect, and what did we run last time with? We ran with 2,000. Let's keep on going with 2,000. So we're gonna run this, awesome. And so I'm gonna start with a quick performance grab, make sure we're all running.

[00:05:21]
Okay, awesome. We're gonna grab this for ten seconds and we're gonna look at this and just see if anything's changed on the performance side, my guess is not a lot's changed. If we've done our job right, update should be taking a very large portion of the time. We're gonna go over to memory.

[00:05:36]
I'll start another recording. We'll come back here. We'll look at this: update is taking an even larger chunk of time. So this is good. It looks like it's just update and run and onMessage, which is doing that JSON decoding, and pretty much everything else is not taking much time.

[00:05:52]
So we're now really taking our library and just making it smaller and smaller and smaller. So if you had a server like this, the amount of stuff you could put on it now: analytics, any sort of extra logging for debugging, anything else. You now have so much more room, because we made our one program run excessively better than it did before.

[00:06:13]
I'll let this keep running for just a second longer because again, the longer you let it run, the better. Now remember from here to here is Websocket or WS shall I say?
>> I just wanna comment that I really like your take about giving yourself more headroom for other things.

[00:06:34]
For instance, on frontendmasters.com, as fast as it is, we wanted to A/B test something, and that A/B testing library is extremely slow. So we were able to use it just kind of targeted, and I don't think it really impacted the user performance too much, because our site is so fast that, like, okay, using this in one case, fine. We were able to rip it out,

[00:06:59]
keep going. But I feel like just having an emphasis on performance gives you that headroom to add observability or logging or testing or these kinds of things.
>> People don't realize how important those things are, and if you pay yourself now, you pay yourself later when you need to make a change.

[00:07:17]
Cuz inevitably someone adds Ramda, someone adds lodash, someone adds all these extra things running, and then your program inevitably gets slow again, right? There is no such thing as programs that don't go through these cycles where everything's really great and then over time becomes really crappy. And usually the crappiness is not a case of, we just need to swap out this one library and things will be faster.

[00:07:41]
And so it's really just having this mentality of, everything you do should be done really well. Do you really need to create all these objects? Do you really need to be doing all these things? Because if you take that mentality through, you will always be keeping your program pretty thin.

[00:07:55]
And then as time goes on, it will remain pretty thin, and then you'll just keep on having that headroom for an extended period of time. It's really nice to do. Now look at it. We've literally taken out the middle section. That entire thing is just gone. This is what it was, from here to here, was that?

[00:08:11]
Now it's completely gone. It actually does feel like something even better is happening. All memory is being managed within C++ except for the ArrayBuffer that comes out, which we just stringify. And so that is actually making a big, big difference. And on the performance side, you'll notice that we just really don't see stuff other than fastBuffer and isArray, there's just not a lot that's happening anymore.
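
Zooming in on that one remaining JavaScript-side allocation, the per-frame copy might be as small as this sketch; the copy matters because uWebSockets.js only guarantees the buffer during the callback:

```ts
// Sketch of the one remaining JS-side allocation: copying the C++-owned
// frame into a JavaScript string before the library reclaims the buffer.
function frameToString(frame: ArrayBuffer): string {
    return Buffer.from(frame).toString('utf8');
}
```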

[00:08:34]
We really only have these three left, they're the last things that are doing anything in our program. So for fun, let's do third optimization at 500. Even though I don't think 500 is even gonna be in our realm of looking at anymore, we've made a program so fast that 500 is just not even testing our server.

[00:08:55]
And then we'll start off with 500. So here we go. We'll run the third optimization, which contains optimization number two, so don't forget that. So let's just open up one more. All right, look how small that text is. Okay, so this would be third optimization at 500. That doesn't look great.

[00:09:29]
That did not look great at all. Yeah, so, weird. So for whatever reason, it looks like we're having a huge regression in how things are being run. So that doesn't, that seems confusing. Are you running the right thing? We should be running the right thing. Okay, we're going down. Regretsunsky, skill issue, this could be a skill issue.

[00:10:03]
We're gonna keep on going, though. Yeah, we're starting to slowly slide down. That's fine, we'll stop it there. We'll now go into 1,000. Again, not an exact science. Again, always measure in production. I swear this works. Okay, for whatever reason, this one's looking way better. There can be false-positive optimizations; this very well could be one.

[00:10:42]
So let this run. Okay, fantastic. All right, fantastic. We did that one, we'll do one more. We'll do 2,000 now. And I honestly think we could probably go up, we'd probably see ourselves doing pretty well into the 3,000s at this point. So one thing that gets really interesting is, as we get to this point, we may not be giving our server enough load to actually see, are we making changes?

[00:11:41]
Are we actually testing it under load, or are we just measuring how it runs normally? It's still looking pretty good, it's below the target goal from the original part of this. We started off with our boss requiring less than 25%. We're looking at less than 25% right now at a much higher load.

[00:11:58]
So pretty happy about that. And so it doesn't look like it's made a huge, huge difference. At this one, at number two at 2,000, you'll notice that it was at 32. With our new change, we're at 22, okay? So we're starting to really see benefits as we're getting towards the higher end.

[00:12:17]
As we get under load, we're starting to see something that's a little bit better. So I'm happy about that. We're not really seeing much happening in the lower amounts. So why is that? I can't really explain that. Is it localhost? Is it that it really doesn't matter?

[00:12:33]
I actually have no idea why it does these things at lower volumes, but at higher volumes, as we get under pressure, we do see large percentage changes. Cuz remember, 33 to 23 is way more impressive than 60 to 50, right? Cuz that change is significantly faster. That's like, what, practically almost, we're getting close to a 50% speed up on this.
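
To make that ratio concrete (my arithmetic, not numbers from the profiler): at saturation, capacity scales roughly with the inverse of CPU cost for the same load, so:

```ts
// Illustrative arithmetic: relative capacity gained by a drop in CPU usage
// under the same load. 33% -> 23% frees much more than 60% -> 50%.
const gain = (before: number, after: number): number => before / after;

console.log(gain(33, 23).toFixed(2)); // ≈ 1.43, roughly a 43% throughput gain
console.log(gain(60, 50).toFixed(2)); // 1.20, only about 20%
```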

[00:12:54]
So very happy about that, fantastic. C++ is still really good. Yeah, maybe the overhead of C++ at the smaller amounts actually makes some level of difference, but as we get bigger, it just scales better than the JavaScript. But we have made some good changes here. We're gonna take the third optimization.

[00:13:10]
We'll put the first optimization on top and see, now that we've really stripped everything else away, are we gonna start seeing the update function actually making a difference now? My guess is we'll probably see a difference.
