Blazingly Fast JavaScript

Revisiting the Memory Profiler

ThePrimeagen

Netflix

The "Revisiting the Memory Profiler" Lesson is part of the full, Blazingly Fast JavaScript course featured in this preview video. Here's what you'd learn in this lesson:

ThePrimeagen discusses the problem of not seeing significant improvements in the program despite making updates and explores two possible solutions: increasing the amount of connections to test if it makes a difference, or focusing on improving the program's efficiency and reducing memory usage. They also introduce the topic of memory usage and suggest checking where the memory is being allocated in the program.

Transcript from the "Revisiting the Memory Profiler" Lesson

[00:00:00]
>> Now it kinda seems like we're starting to approach a bit of a problem, which is that any updates we're making at this point are fighting over not a lot of wins. And so we can increase the amount of stuff. Look at that. Now our ratio's bounced all the way up.

[00:00:13]
Goes to show, hard to trust anything. We're going out of control. We're going out of control. It's going way up! So who knows what's happening there, but we'll just stop it anyways. Maybe we didn't measure the other ones long enough. But either way, we're looking like we're starting to get something happening.

[00:00:27]
At this point, we've made an improvement. We can see that it's better, but we're still not really seeing any sort of change in our program. So I think there are only a couple of answers here. One, we can try increasing the number of connections and saying, okay, let's bring down this thing and really make sure we're under load.

[00:00:43]
Does it actually make a difference at a higher level? Can we go to 3,000 connections and see what happens? Or two, at this point, we can just say our only goal is to improve our program, and damned be current metrics. We just wanna see all of our processing be as small as possible.
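For reference, the lesson's own load tester isn't shown in this excerpt. A rough sketch of the "crank up the connections" idea, assuming a WebSocket server and the ws package (the URL, port, and connection count below are placeholders), might look like this:

    // Rough sketch only, not the lesson's load tester. Assumes `npm install ws`
    // and a running WebSocket server; the URL and count are placeholders.
    const WebSocket = require("ws");

    const CONNECTIONS = 3000;          // crank this up to increase load
    const URL = "ws://localhost:8080"; // hypothetical server address

    const sockets = [];
    for (let i = 0; i < CONNECTIONS; i++) {
      const ws = new WebSocket(URL);
      ws.on("error", (err) => console.error("socket error:", err.message));
      sockets.push(ws);
    }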

[00:01:00]
Cuz you could imagine that at some point, you will be integrating with other programs, and being slower will matter. And so I kinda like the second approach. For me, this is kinda the fun side. And I've had to do some internal projects at my job where we have to just simply make it as fast as possible because we're about to be integrating into a very large library.

[00:01:19]
And so we find every single way possible to make it use as little memory as possible, pull things out, do every single trick in the book to make it quick. Cuz if you do not do it that way, your program's gonna be slow. Now, this doesn't always happen, of course, right?

[00:01:32]
Typically, when you're working on anything, you just kinda throw things in. But on my previous project, we had a CSV that, when it got generated, got so big that I couldn't stand up a Node instance internal to Netflix without it falling over while two CSVs were being generated.

[00:01:47]
And so having to just play the game of, how can we make this thing as efficient as possible, can be very beneficial, especially as your scale starts going up. And all of a sudden, it went from generating one CSV to generating ten CSVs and not falling over by just being really aware of memory.

[00:02:04]
Now, unfortunately, we won't be doing any of those fun memory techniques. But for this one, let's start just looking at what is left to kinda clean up, to really just tighten up our program and make it as good as possible.
>> First, I have to call out this awesome comment.

[00:02:17]
Writing in Rust is really a spiral. I'm not sure if it's a spiral upward or downward yet.
>> [LAUGH] That was really great. I really hope everyone heard that comment cuz that was [SOUND], it was a beautiful, beautiful comment. Which direction is that spiral? We still don't know.

[00:02:33]
But what matters is that the spiral is happening. So let's continue on with our notions here. We're gonna start covering some of the next kinda fun topics. And the big thing, and the hard part about this, is that with these topics, in this example on my localhost, I've really yet to show any real difference.

[00:02:54]
But six months ago and a year ago using programs that are running on the server, I've seen huge differences. And so are we gonna see something locally? Probably not. Will you be able to reach for this in your tool bag as a way to really make big memory wins, and really, in the end, performance wins?

[00:03:10]
Probably, especially on servers with fewer cores than our modern one here. So can we improve our program based on what we now understand? Well, we understand a lot of things, right? That all of our time is spent in update and run. And what about memory? Where's all of our memory being allocated?

[00:03:24]
Do we even know where that is? So here, we've never actually even checked where our memory was. Poor form, people. Let's just make sure we know. Man, I did this backwards. It's driving me nuts. Let's take this memory and, yep, see, we can't do it backwards. It's just not ready for it.

[00:03:42]
It's just not ready to go backwards like that. There we go. All right, so we're gonna do this. We're gonna look at where all the memory is. I think this should be our shape, but just to make sure, cuz we've made a couple changes to our program, let's just make sure that we have the same shape of memory.
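For reference, the profiler UI used in the lesson isn't reproduced in this transcript. One built-in way to see where allocations happen in Node is the sampling heap profiler, either via node --heap-prof or programmatically through the inspector module; the file name and sampling interval below are just illustrative:

    // Not necessarily the lesson's tooling; one built-in option is:
    //   node --heap-prof server.js
    // which writes a .heapprofile you can open in Chrome DevTools' Memory tab.
    // The same sampling profiler is also available programmatically:
    const inspector = require("node:inspector");
    const fs = require("node:fs");

    const session = new inspector.Session();
    session.connect();

    // Start sampling allocations (samplingInterval is in bytes).
    session.post("HeapProfiler.startSampling", { samplingInterval: 16384 }, () => {
      // ... run the workload you want to profile here ...

      // Stop sampling and write the profile out for later inspection.
      session.post("HeapProfiler.stopSampling", (err, result) => {
        if (!err) {
          fs.writeFileSync("alloc.heapprofile", JSON.stringify(result.profile));
        }
        session.disconnect();
      });
    });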

[00:03:57]
I don't wanna make any guesstimates or false assumptions here, but I bet you it's gonna probably look like this. And the next couple changes are gonna probably be both great and very disappointing, all at the same time. I love those kinds of changes, where when you do them, afterwards, you feel bad about yourself.

[00:04:22]
That's really where the big Ws happen, because you know you've committed potentially some programming sins, but nonetheless, they're great programming sins. Yeah, someone's asking, hey, what do you do if you're dealing with memory leaks? And someone's suggesting Valgrind. Another big technique, especially when it comes to JavaScript, is that you don't have to use Valgrind for that.

[00:04:44]
You can use heap snapshots if you really do have an actual memory leak and you can watch your memory just run up. Do a heap snapshot, wait a minute, do a heap snapshot, wait a minute, do a heap snapshot. And then you can actually look at the diffs between them, or the objects that stayed alive between them.
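As a concrete sketch of that snapshot-and-diff loop (the file names and one-minute interval are just illustrative), Node can write DevTools-compatible snapshots with the built-in v8 module:

    // Rough sketch of the "snapshot, wait, snapshot" approach described above.
    const v8 = require("node:v8");

    let n = 0;
    // Write a heap snapshot every minute; open the .heapsnapshot files in
    // Chrome DevTools' Memory tab and use the comparison view to spot which
    // object types keep growing between snapshots.
    setInterval(() => {
      const file = v8.writeHeapSnapshot(`leak-${++n}.heapsnapshot`);
      console.log("wrote", file);
    }, 60_000);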

[00:04:59]
And if you see one object type keep growing, you have a pretty good idea of where you accidentally messed up, and so you can follow that. So we're not trying to solve any of that stuff right now. That is not our goal. So it looks like, if we look at our stuff, pretty much all of our memory is being created at this one point in our program, exclusively.

[00:05:21]
So this is good to know, this is good to see. Let's try to prevent that from happening. Let's reduce the amount of memory we're creating.
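The actual change isn't shown in this excerpt, but as a generic illustration of "create less memory," one common trick is to pool and reuse objects on hot paths instead of allocating new ones each time (the Pool class below is an example, not the lesson's code):

    // Illustrative only: reuse objects instead of allocating on every call.
    class Pool {
      constructor(create) {
        this.create = create;
        this.free = [];
      }
      get() {
        // Hand back a previously released object when one is available.
        return this.free.pop() ?? this.create();
      }
      release(obj) {
        this.free.push(obj);
      }
    }

    // Instead of `const p = { x: 0, y: 0 }` on every tick, borrow and return:
    const points = new Pool(() => ({ x: 0, y: 0 }));
    const p = points.get();
    p.x = 1;
    p.y = 2;
    // ... use p ...
    points.release(p);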
