Transcript from the "Challenge 5: Solution" Lesson
[00:00:00]
>> Kyle Simpson: So exercise 5, as I warned you ahead of time, it was not that many lines of code. Hopefully, it was pretty straightforward. We start out with this foo function, and what we wanna do is we want it to return a function that remembers the first two arguments passed to foo, and adds them together.
[00:00:20] So let's name those two parameters, x and y. And let's make and return a function. Here's our machine making machine as we talked about before.
>> Kyle Simpson: Now, this function needs to add x and y together and return that result. So that's pretty straightforward. return x + y. Let's do some analysis that's consistent with our current discussion that we've been having.
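[The on-screen code isn't captured in the transcript. A sketch consistent with the line references in the discussion, assuming example inputs like 3 and 4 since the result 7 is mentioned, might look like this:]

```js
function foo(x, y) {
  return function() {
    return x + y; // the deferred work
  };
}

var x = foo(3, 4);

x(); // 7
x(); // 7
```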
[00:01:00] Is the foo function itself pure?
>> Speaker 2: High degree.
>> Kyle Simpson: High degree of confidence that foo itself is pure. What about the function that it returns? The one that we've named x down there on line 9.
>> Speaker 2: Yes.
>> Kyle Simpson: High degree of confidence?
>> Speaker 2: Because there's nowhere that x and y can be redefined.
[00:01:31]
>> Kyle Simpson: Because there's nowhere that x and y can be reassigned, so we know that every time we call x, we're always gonna get that same value of 7 back. That's exactly the right mindset, and that's why I keep reinforcing it, so that you'll develop that instinct of looking at a function and knowing what its behavior is.
[00:01:48] So far, the best definition we've got is given the same inputs, we get the same output. Now, we wanna talk about potentially even a more sophisticated definition for function purity. But to get there, I need to tweak some understanding of how we approach this. So I wanna ask you this question, where does the work happen, the addition?
[00:02:15] Does it happen as a result of line 7? Or does it happen as a result of line 9 and 10?
>> Speaker 3: Can you ask that question again?
>> Kyle Simpson: The work that happens to do the addition of those two numbers, does that work happen on line 7? Or does it happen on lines 9 and 10?
[00:02:33]
>> Speaker 2: 9 and 10.
>> Kyle Simpson: Can we see that it does, in fact, wait, if you will, to do that addition until we run lines 9 and 10? In a sense, if lines 9 and 10 never ran, we were kind of deferring the work in case nobody ever needed it.
[00:02:55] Now, what if that work was something more intensive than just mathematical addition? What if it was a list of a million values, and we were calculating some variance across it, and it took a lot of work to calculate? And we weren't really sure whether or not anybody was gonna even need the answer.
[00:03:14] So in that case, the deferral, putting it on line 3, would have made sense. Let's defer that work in case nobody ever calls the x function. No reason to do unnecessary work, right? There's a term for that in programming. The term for that is that's a lazy algorithm.
[00:03:34] Now, we don't mean lazy in the pejorative, insult sense. We mean it as compared to an eager algorithm, which would have done the work ahead of time, up front. So what if we said, I wanna do the work up front? Well, we could have calculated that, for example, on line one and a half.
[00:03:53] Now, I wanna talk about the difference in end result. Because we are, in fact, waiting until line 9. So in case line 9 never happened, we didn't waste any work. But what do we know about line 10?
>> Speaker 2: It's gonna do the same thing-
>> Kyle Simpson: We're redoing the work.
[00:04:12] So if that's expensive work, and you expect that it's possible somebody might ask for it twice, well, now you might start to say, maybe I ought to do it once up front. So, what if we did the work up front? What if we said sum = x + y, and then we simply return the sum variable?
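[A sketch of the eager variant being described, under the same assumed inputs as before:]

```js
function foo(x, y) {
  var sum = x + y; // eager: the work happens when foo is called
  return function() {
    return sum; // no recomputation on later calls
  };
}

var x = foo(3, 4);

x(); // 7
x(); // 7
```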
[00:04:37] Now which line of code does the work happen on?
>> Speaker 2: 8.
>> Kyle Simpson: It happens as a result of line 8. And do we still get the same output on line 10?
>> Kyle Simpson: Do we still get the same output on line 11? So is x still a pure function?
[00:05:00]
>> Speaker 2: Yes.
>> Speaker 4: No.
>> Kyle Simpson: Uh-oh, somebody says no. Why is it not a pure function?
>> Speaker 4: Cuz it's no longer pure, because you're changing the value essentially within the function itself.
>> Kyle Simpson: So when does that change? You are correct that we assign the sum variable, but when does that change occur?
[00:05:25] Does it occur before or after we've created and returned the function?
>> Speaker 2: Before.
>> Kyle Simpson: Before. So observationally, the x function, by the time we get the x function, the answer to this question is already set in stone and it's never gonna change again, right?
>> Kyle Simpson: So observationally, does it still meet our current running definition for function purity?
[00:05:55]
>> Kyle Simpson: Remember, our definition is given the same inputs, in this case, there are no inputs. Given the same inputs to x, are we always gonna get the same output 7?
>> Speaker 2: Yes.
>> Kyle Simpson: We have a high degree of confidence in that, right? Unless somebody somehow mucks with the JavaScript language runtime, itself, I'm pretty sure that's what it's gonna do.
[00:06:19] But now the downside is, if that's expensive work, and nobody ever runs lines 10 and 11, we wasted that work. So there is a tension between this lazy algorithm and this eager algorithm. And you might then wonder, is there somewhere in the middle? Is there a compromise between the two that gives us the best of both worlds?
[00:06:46] So what if we could say, I wanna wait to do the work until somebody asks for it. So I want to be lazy about it. But once I've done the work once, I wanna remember that and never do the work again. See how that would be a compromise between the two?
[00:07:07] So what if I said var sum, and I didn't do the work here. And inside of our x function, I said if (sum === undefined), cuz the only way that sum can ever be undefined is if this is the first time we've run. Then why don't we set sum equal to that work?
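[A sketch of this lazy-but-remembered variant, again assuming inputs like 3 and 4:]

```js
function foo(x, y) {
  var sum; // declared, but the work is deferred
  return function() {
    // only the very first call can see sum as undefined
    if (sum === undefined) {
      sum = x + y; // do the work once, remember the result
    }
    return sum;
  };
}

var x = foo(3, 4);

x(); // 7, the work happens on this first call
x(); // 7, remembered, not recomputed
```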
[00:07:34] The next time we run, are we gonna redo that work?
>> Kyle Simpson: Nope.
>> Kyle Simpson: See how this is kinda somewhere in the middle. It's still a lazy algorithm that defers the work, but now we know that we only gotta do the work once on line 13. And we're not gonna redo that work on line 14.
[00:08:01]
>> Kyle Simpson: But now we have to ask ourselves, is x still a pure function?
>> Speaker 2: I'm gonna stick my neck out and say yes.
>> Kyle Simpson: You're gonna say yes, or you're gonna say I have a high degree of confidence, or a mid-range degree of confidence?
>> Speaker 2: A high degree of confidence, because it's still going to always return the same output given the same input.
[00:08:29]
>> Kyle Simpson: Okay, so given that running definition that we keep going with, the blue side of the Rubik's cube, if you will. If we look only at what's happening with x, it still observationally is always gonna produce the same result. And if we don't consider any of the concerns, like for example, that the first run of x is gonna run a lot slower than the second run.
[00:08:53] If we don't consider the time delay to be something in the system that we care about, then lines 13 and 14 are observationally indistinguishable. Do you follow that? That observation is going to lead us to our final evolution for function purity, our final definitional evolution.
[00:09:18] Where we look at the completeness of the Rubik's cube, if you will. The official academic functional programmer leads with this kind of definition for function purity. There's a term for it, it sounds fancier than it really is, and this term is referential transparency.
>> Kyle Simpson: Functional programmers will say that a pure function is a function that has referential transparency.
[00:09:48] Great, thanks, we can all go home, we're done, right? What does that mean? What is referential transparency? It sounds more complex than it is. You can sound smart at parties if you pull that out: well, my referential transparency's intact. But here's what it really is. We take line 13, and we know that it's a function call that will return the value 7, same thing with line 14.
[00:10:18] What referential transparency says is, we could take that function call, and replace it with its result, and it would have no other impact on the system. It would be unobservable to the rest of the program if we took a function call and replaced it with its return.
>> Kyle Simpson: So wherever that x call is being used, if we just took the x call and replaced it with the value 7 in this case, the rest of the program would behave identically.
[00:10:51]
>> Kyle Simpson: That's what referential transparency says, that a function call can be replaced with its return value and not affect the rest of the system.
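[To make the replacement concrete, a hypothetical usage site, using the assumed example values from earlier:]

```js
var x = foo(3, 4);

// with referential transparency, these two lines are interchangeable
// anywhere in the program, with no observable difference:
console.log(x() * 2); // 14
console.log(7 * 2);   // 14
```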
>> Kyle Simpson: Is that true of the x function? Does it have referential transparency? Can its result replace the call and not affect the rest of the system?
[00:11:15] I would say I have a high degree of confidence that this x function has referential transparency. And that is the official, most accurate definition for function purity. And you can see how our understanding of that concept has changed a bit as we've gotten a different perspective on the Rubik's cube.
[00:11:35] We started out saying, well, it just can't reference any variables outside of itself, and that turned out to not really be very complete or accurate, cuz we're only looking at it from one angle. And then we said well, given the same inputs, I'm always gonna get the same output, and that worked for a while.
[00:11:50] And now we say, well actually, you could replace the function with its return and not affect the system. That's the full and complete answer for function purity. Now, here's why this matters. You might think, well that suggests then that I should replace it. If I don't need the call, why not just compute it and replace it?
[00:12:10] We are not saying, with referential transparency, that you should actually do so. It is to suggest that you could, and not affect the system. So is this just something to make us feel clever? In Haskell and other statically typed functional programming languages, they actually take advantage of referential transparency.
[00:12:36] Because they can guarantee, by virtue of how the language works, that a function does, in fact, have referential transparency. And so what the compiler will do is compute the value the first time, and then every other time after, it won't run the function again, it'll just substitute in the value.
[00:12:58] It's a lot more performant that way, right? So the compiler takes care of that for you. You don't need to do any of the stuff we're doing here, like tracking the variable and holding on to it. It just automatically does that for you, because it knows about referential transparency.
[00:13:16] JavaScript does not have that same characteristic, we cannot guarantee a function is entirely, completely pure. And so for JavaScript, we're not talking about the compiler doing the replacing, so why does it matter? It matters to the reader of your code. And this is me, not as an official functional programmer, but trying to help you understand what these concepts mean to me.
[00:13:44] It is to suggest that the reader of your code can compute that line once, and then any time they see that same line, they don't need to recompute it. They have perfect confidence that when they see x() somewhere else in your code: I already know that answer is 7, I don't need to recompute it.
[00:14:06]
>> Kyle Simpson: Could you see how that would help with the readability? If you had something repeated over and over and over again, and somebody was gonna have to go recompute it or refigure it out every time. What we've done by using a pure function and telling that person, this function has referential transparency, we've made it so they only need to do the work at most once.
[00:14:27] Which is gonna make repeated reading of that line of code a lot easier.
>> Kyle Simpson: That's why referential transparency, and moreover, why function purity matters. It's the ultimate aid to the reader to help them understand and trust a line of code.
>> Kyle Simpson: Questions about using closure, using these hybrid techniques?
[00:15:02]
>> Kyle Simpson: What we've done here, where I have this variable, and I changed its value and recomputed it, but only in a way where, from the outside, we still maintain referential transparency, that's a very ad hoc way of doing a thing that has a name in functional programming. There's a term for this kind of adaptive memory thing that's going on.
[00:15:28] And that is what's called memoization, M-E-M-O-I-Z-A-T-I-O-N. Now, I don't know what that word actually means, and I have no idea if what I'm about to assert is where it comes from. But in my brain, when I see the word memoization, I feel like they just dropped the R, and I think memorization.
[00:15:48] Just one letter missing, because memoization is essentially memorizing the work that's done. So this is what memoization does. You can take a function and memoize it. It's an adaptation, it's a machine-making machine, if you will. The new function will take every input that you give it and store that input with the output.
[00:16:16] It will keep track of that, it will have a memory. And then the next time you call that function with the exact same input, it will find that in its memory and just give you the output back, and not recompute it. That's a memoized function. So this is sort of an ad hoc, one-off memoization.
[00:16:37] We didn't compute the work until we ran it the first time. But then once we ran it, we remembered that work, and now we're just gonna return the result of that memory, over and over again. This is an ad hoc form of memoization. There's a generalized memoize utility that can be built, and it comes with most functional programming libraries, where you can take a function whose results you expect to be expensive to compute.
[00:17:05] And you expect it's gonna be repeatedly asked for over and over, and over, and over, and over again. You might wanna memoize the function. It's not really a behavioral difference, it's a performance difference. It's basically for performance. Now, thinking about what memoization does, it literally creates a variable on the outside of the scope that we're talking about, and that variable grows infinitely.
[00:17:38] Every time you give it a new input, it's gonna add a new entry to that cache. That's really stretching the bounds of what we would normally consider to be a pure function, isn't it? Cuz now we have this thing that's growing and the state that is changing over time.
[00:17:54] Here's why a memoized function still fits within our definition. Because, first and foremost, observationally, the function that we are dealing with, the x function, still has referential transparency. And moreover, that cache, that growing state of things, is not observable by any other part of the system. If that cache was out in a global variable, that would absolutely be a side effect, and we would have violated function purity.
[00:18:30] But with memoization, where do you think they store the cache?
>> Kyle Simpson: In the closure, just like we've stored the cache in the closure here. We've closed over the variable sum; it's the closure that holds onto that variable and holds onto that state. A memoize utility is gonna do exactly the same thing, except instead of a sum variable,
[00:18:56] it's gonna have a giant object cache that it keeps track of, but it's closed over. So what I'm getting at is, the usage of closure that is consistent with functional programming is to use closure to maintain state that can't be observed from the outside world.
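[A minimal sketch of what such a memoize utility might look like, for a single-argument function; library versions handle multiple arguments and smarter cache keys:]

```js
function memoize(fn) {
  var cache = {}; // held privately in the closure, unobservable from outside

  return function(input) {
    if (!(input in cache)) {
      cache[input] = fn(input); // compute once per distinct input
    }
    return cache[input]; // repeat calls skip the work
  };
}

var square = memoize(function(n) { return n * n; });
square(4); // computed: 16
square(4); // remembered: 16
```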
[00:19:23]
>> Kyle Simpson: That's the whole purpose of closure: to take a function and give it a memory. Now let me give you this example, here's a function foo.
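[Again, the on-screen code isn't captured in the transcript; given the answers that follow, it's presumably something like:]

```js
function foo() {
  var id = 0;
  return function() {
    return id; // id never changes, so every call returns 0
  };
}

var x = foo();
```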
>> Kyle Simpson: Is x a pure function, does x have referential transparency in this example?
>> Speaker 4: Yes.
>> Kyle Simpson: Every time we call it, what are we gonna get?
[00:20:00]
>> Speaker 4: Zero.
>> Kyle Simpson: Everybody agree we're gonna get zero?
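[He then makes one small change to the on-screen code, presumably incrementing id on each call, something like:]

```js
function foo() {
  var id = 0;
  return function() {
    return ++id; // one simple change: the closed-over state now mutates
  };
}

var x = foo();
```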
>> Kyle Simpson: Is x still a pure function now?
>> Speaker 2: No.
>> Speaker 4: Yes.
>> Kyle Simpson: What's gonna happen when I call x the first time?
>> Kyle Simpson: What do I get?
>> Speaker 2: One.
>> Kyle Simpson: What's gonna happen when I call x the second time?
[00:20:25]
>> Speaker 2: Two.
>> Kyle Simpson: That doesn't sound like function purity or referential transparency to me. That sounds like, with that one simple change, we're still using closure, but all of a sudden we've violated referential transparency.
>> Kyle Simpson: That's why referential transparency is the definition that actually matters.
[00:20:50] Cuz in all our other definitions, everything's fine. But we fail referential transparency, so therefore we fail function purity. A usage of closure that is consistent with functional programming closes over state that does not observationally change. If the state, in this case the id variable, observationally changes, it's not consistent with functional programming, it's a violation of referential transparency.
[00:21:29] So as long as you can contain those effects and keep them unobservable, you've maintained referential transparency. And I'd go so far as to say, you can't do functional programming without closure. So it's really important for you to be able to juggle that tension: how do I use it, and how do I use it safely, versus how do I use it and everything falls apart?
[00:21:57]
>> Kyle Simpson: Any other questions about this topic of closure? Does id always get redefined? No, id is defined once on line 2. Basically, id gets defined as a result of line 8. And if I call x over and over and over again, there's only one id. The first time, we're gonna return 1, the second time, we're gonna return 2, then we're gonna return 3, and so on, and so on.
[00:22:23] There's only one id, it doesn't get redefined, cuz we're not re-calling foo every time. We're only calling the result of foo, which is the x function.