Check out a free preview of the full Practical Problem Solving with Algorithms course

The "Optimize Recursion with Memoization" Lesson is part of the full, Practical Problem Solving with Algorithms course featured in this preview video. Here's what you'd learn in this lesson:

Kyle explains how recursive algorithms can be optimized by using memoization. Once a path count is computed, it can be cached using a combination of the starting digit and the hop count as the key. The option-2 branch can be used as a starting point for this lesson.


Transcript from the "Optimize Recursion with Memoization" Lesson

[00:00:01]
>> All right, so let's pick back up with our knight's dialer solution. When we were together last time, we actually completed both option one and option two. You recall that I kind of jumped ahead a little bit with factoring out that long function for the nearby paths calculation.

[00:00:21]
That's actually option two. So, you've already done option one and option two. For what we're gonna talk about from here forward, if you've got the repo checked out, use option two as your base branch, and we'll be working on option three.

[00:00:37]
You also recall that we did a little bit of performance benchmarking on the growth of the run time. And we saw that it was growing just a little bit more than double every time we added one more hop.

[00:00:54]
And that fit very nicely with our theoretical analysis that there were 2.222 paths on average that you could go to from any one of the digits. It doesn't always happen that the theory and the practice, the performance benchmarking, match up, but it's kinda nice when you can see a really clear illustration of that principle.

[00:01:21]
Well, obviously that's impractical. We can't roll out to production something that's gonna be working in an exponential, or even greater than exponential, sort of way. So let's talk about some ways that we can optimize our solution. The first thing we wanna do is observe that there are quite a few places where we are doing the same work over and over and over again.

[00:01:43]
We're repeating work. As I've said multiple times in this course so far, the key thing is that once you do some work, don't keep redoing that work. Find some way to remember or take advantage of that work, cuz the biggest drain on your performance is when your solution causes more work to be done just because it didn't remember the work it already did.

[00:02:07]
Or it had a piece of information that it could have saved, and it just didn't save it. So let's look at this diagram. You recall this was kind of our recursive diagram. And the tree itself doesn't exist. It's a conceptual tree, but it represents the call stack tree, if you will.

[00:02:23]
And so, in this call stack tree, we see that there is actually quite a bit of repetition. For example, you'll notice the f(4, 4), that's countPaths(4, 4), and you'll notice that there are multiple other places in the tree where we get to f(4, 4). So we've done that work over there on the left, and then we come around to f(4, 4) and we redo all of that work.

[00:02:47]
We remake all of those calls down that part of the tree. And in fact, that's not the only place. Even in just this small little diagram, we also see all of these are being repeated in a different part of the tree. So all of that work is unnecessary, and there's a technique for remembering the work that we do, and that technique is called memoization.
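For reference, here's a minimal sketch of what the recursive option-2 countPaths being discussed might look like. The NEARBY_KEYS map and the exact shape of the loop are assumptions for illustration; the actual course code may differ:

```js
// Which keypad digits a chess knight can hop to from each starting digit
// (illustrative; the course factors this "nearby paths" lookup out of countPaths).
const NEARBY_KEYS = {
  0: [4, 6], 1: [6, 8], 2: [7, 9], 3: [4, 8], 4: [0, 3, 9],
  5: [],     6: [0, 1, 7], 7: [2, 6], 8: [1, 3], 9: [2, 4],
};

// f(start, hops) in the diagram: how many dialings of `hopCount` hops begin
// at `startingDigit`. Each call fans out into ~2.22 recursive calls on average,
// and the same (digit, hops) pairs recur in many branches of the call tree --
// that's the repeated work memoization will eliminate.
function countPaths(startingDigit, hopCount) {
  if (hopCount === 0) return 1;
  let paths = 0;
  for (const nextDigit of NEARBY_KEYS[startingDigit]) {
    paths += countPaths(nextDigit, hopCount - 1);
  }
  return paths;
}
```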

[00:03:07]
Now, I don't know exactly, I've read various different histories of where this word comes from. It's a little bit of an unusual sounding and unusually spelled word. I don't know where it comes from, but I will tell you how I remember it, okay? This is my little mnemonic or something.

[00:03:23]
This is how I remember it. Because memoization is about remembering, it's about memorizing things. So the way I remember it is it's like the word memorization, but they forgot the r. I don't know, that just helps me a little, maybe it's too early in the morning for those puns, but anyway, that's how I remember memoization, memorization without the r.

[00:03:42]
So, this might sound like a somewhat more complex technique, but again, it will potentially feel a little anticlimactic. In terms of how much code we have to write, it's a very small change to our existing option two solution to retrofit it to be memoized.

[00:04:00]
Now, I do wanna point out that memoization is not the magic hammer, the Thor's hammer that can just do anything, right? It's not what you wanna bang every problem with. And so there are several caveats that we wanna be aware of. When we implement this, it will feel like this magical thing that we should just do all the time, everywhere.

[00:04:22]
But do keep in mind that there are only specific circumstances where we should employ this technique. Let's switch back over to our code editor. We're in dialer, and what we wanna do is memoize the countPaths function. So what does that mean? What that means is that we want to take that function and cause it to cache the result, the output, the return value from it, for any unique set of inputs.

[00:04:54]
In other words, take the pairing of inputs, here we have two integer inputs, the starting digit and the hopCount here on line 27, take these two inputs and treat them as if they're a key in a cache. And then, by keying against that cache, if we ever call the same function with exactly the same inputs, we return exactly the same output without recomputing.

[00:05:18]
Compute it the first time, remember that value, and return it. So this is actually quite a common technique. You see it much more commonly in functional programming, but the principle here requires something that may not be super obvious to you at first. It requires that this function be what we call pure.
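As a quick, hypothetical illustration (not from the course code) of what "pure" means here and why it matters for caching:

```js
// Pure: the output depends only on the inputs, so caching by inputs is safe --
// add(2, 3) returns 5 no matter when or how often you call it.
function add(a, b) {
  return a + b;
}

// Impure: the output also depends on outside state (`factor`), so a value
// cached for scale(10) could become wrong if `factor` changes later.
let factor = 2;
function scale(x) {
  return x * factor;
}
```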

[00:05:39]
Again, another term from functional programming. So what we need to do is essentially adapt or wrap the countPaths function, and we're gonna use a utility that we're gonna define, called memoize, to do that. So I'm gonna go down here to the bottom of the file, that's just where I want to put it, you can put it wherever you'd like, but I'm gonna write a function called memoize.

[00:06:02]
And it takes in the function that we want to do this adapting to, which will be the countPaths function. We will pass the countPaths function in. It takes that in, and it needs to create a cache. Remember, I said that we want a cache to remember the outputs from it.

[00:06:18]
So we'll just use a plain old simple object for this cache. We'll talk about a variety of little caveats that exist with generalized memoization. All right, so we now wanna produce a new function. Remember, we're adapting, so we're gonna create and return a new function which I'll just call memoized; you could call it really whatever you wanted.

[00:06:38]
The signature of the new function needs to be the same as the original function, and we're gonna take advantage of the fact that we know exactly what that signature is. In other words, we're gonna avoid trying to do any sort of general "I can memoize any function" utility. That's not our goal here, we only wanna memoize this specific function.

[00:06:58]
In fact, we could have named it memoizedCountPaths or something. We only wanna do the minimal work that we know is necessary. If you were trying to write a general utility for this, you would have to handle inputs of any data type, they could be objects, they could be arrays.

[00:07:14]
And constructing a key for any arbitrary number and type of inputs can get quite complicated. That's why, generally, you would not implement your own general memoization utility. But in specific cases where you know you only need it for this one function, I fully recommend a purpose-built memoization just for that function or just for that purpose, and I think it actually gives you better performance.

[00:07:42]
So that's what we're going to do here. This is significantly more complicated when you wanna do it in the general sense, and in those cases you should just use a utility from a library like Lodash or Ramda or something like that. They all provide these memoize utilities. So memoized needs to take in, I think I called them startingDigit and hopCount up here in countPaths.

[00:08:07]
I'm gonna shorten those, I'm just gonna call them start and length, but they're both gonna be integers that are being passed in. So it's the same signature. And the fundamental job of a memoized function is to check whether the thing we're asking for, that is, the inputs, has been seen before, whether it's already in the cache.

[00:08:33]
If they're not in the cache, we need to do the underlying work. So we're gonna just say, if the cache does not have it, and we need a key for that. And this is kind of a stylistic choice here, but I'm just gonna take the two integers and concatenate them into a string with a colon between them.

[00:08:52]
You could do almost anything, right? Well, you wouldn't wanna do math on the numbers unless you came up with a mathematical equation, like exponentiation, that gave you a unique number. You do want these keys to be unique, but it really doesn't matter how you choose this key, as long as each input set produces a unique key. For example, naively concatenating two arbitrary integers without a separator could collide, since (1, 23) and (12, 3) would both produce "123"; the colon avoids that.

[00:09:12]
You don't want any collisions where you're accidentally overwriting or returning the wrong value. Okay, so we're gonna take the start, then a colon, and then the length, and put that key into the if statement. And inside the if, we will actually do the work. So we'll set it, actually I'm just gonna copy this cuz I don't wanna retype it.

[00:09:38]
We're actually gonna set that value into the cache by calling the function, and then simply return it. Now, to adapt our countPaths function with memoize, which is a fairly straightforward task, there are a couple of different ways that you might choose to do it, and some people kinda bristle a little bit at how I like to do it.

[00:10:02]
But I literally just like to overwrite the countPaths function with its memoized version, because there's no case where I ever want both versions in my program at the same time. So I will just take countPaths and reassign it. And we can do that because it was a function declaration.

[00:10:17]
You can't do that if you declared it with const or something. I will just reassign countPaths to the result of passing it into memoize. And that's it for the code changes. I promised you that it was a bit anticlimactic.
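Put together, the change described above looks roughly like this. It's a minimal sketch assuming the "start:length" string key and the names used in the transcript; the actual course code may differ in small details:

```js
// Purpose-built memoize: only handles countPaths' two-integer signature.
function memoize(fn) {
  const cache = {};

  return function memoized(start, length) {
    // Key the pair of integer inputs as a "start:length" string.
    const key = `${start}:${length}`;

    // Only do the underlying work if we haven't seen these inputs before.
    if (!(key in cache)) {
      cache[key] = fn(start, length);
    }

    return cache[key];
  };
}

// Overwrite countPaths with its memoized version -- allowed because it was
// defined with a function declaration rather than a `const` assignment.
countPaths = memoize(countPaths);
```

Note that because the recursive calls inside countPaths resolve the countPaths identifier at call time, after this reassignment they go through the memoized wrapper too, which is what lets even the inner recursive calls hit the cache.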

[00:10:47]
If we go back to our slides, what we can see then is that the things that we were eliminating, those are just calls that don't even need to get made. And the bigger the tree gets, the less of the tree actually even exists, because we just skip the call and pull the value out of the cache.

[00:11:07]
So conceptually, you should be able to infer from this that it's gonna create quite a significant amount of savings. But I think it's gonna surprise you just how much the savings is when we run the code. And that's where memoization sometimes starts to feel kinda magical. So let's switch over to the web page.

[00:11:26]
You might recall from before that we were starting from digit 4 and we were doing things around the 20 to 25 hop count. And we were starting to see that it really started to add up. So I'm gonna redo that. And you can kind of remember the timings that we had before and compare.

[00:11:44]
So we're gonna do a hop count of 20 from digit 4, and it was 1 millisecond. And then we're gonna go up to 21, and it was 0.3 milliseconds. And then we're gonna go up to 22, and it's getting even shorter and shorter because it's remembering all of the work that it's already done.

[00:12:05]
And each time we add a new hop count, there are only a couple of extra function calls that need to be made and all the rest of that is now in our cache. So I could go up to huge, huge numbers that would have taken hours or days to compute with the previous solution, and they run in fractions of a millisecond.

[00:12:26]
In fact, I've run this up with very large numbers, and I'm sure some of you are starting to do this. A hop count of 500, which produces a result so big that it overflows JavaScript's number type, still only took 6.3 milliseconds to do all of those function calls.

[00:12:42]
Because most of that work didn't need to be done. We were just sort of naively redoing the work over and over. And by analyzing that, noticing that this recursion does a lot of backtracking and a lot of revisiting of the same kinds of work, that's what clues us in.

[00:12:58]
Hey, maybe I ought to memoize this function and significantly improve the outcome. So you might think, well, okay, then we're done, right? If memoizing my recursion worked this well, shouldn't I just memoize everything and never worry about performance issues again? Well, we need to be very careful about some of the caveats around memoization.

[00:13:19]
The first big caveat is that, yes, we are indeed saving a whole bunch of calls. But if we were to do some memory profiling, anybody care to guess how many thousands of entries are cached in our cache? Because think about all of the unique combinations of starting digit and hop count that we've put into there.

[00:13:42]
It's gonna be a really large number, and so what we are doing is trading memory usage to get faster performance. That's one of the classic trade-offs that we sometimes have to make. We sometimes say, I need it to run faster, so I'm just gonna have to give up some more memory.
