Transcript from the "Introduction to Monads" Lesson
[00:00:00]
>> [MUSIC]
[00:00:04]
>> Douglas Crockford: So has anyone heard of Monads? Monads ring a bell? So Monads is this thing from category theory that found its way into Haskell, which solves a particular problem that Haskell has. And it turns out to be a generally useful concept, but there's this amazing curse that comes with it.
[00:00:26] Once someone understands Monads, they lose the ability to explain it to anybody else.
>> Audience: [LAUGH]
>> Douglas Crockford: So you can go out on the web, and you can search for monad tutorial, and read all the stuff that comes back. And everyone's going, yeah, this is the thing you need to know.
[00:00:46] And you read it and it makes no sense at all, and there's tons of it. So Monads are this functional pattern which are incredibly powerful and very expressive but at the same time completely impossible to explain to people. So I'm gonna attempt to break the curse today to give you enough knowledge about this so that you can actually go and read that stuff and go, yeah, I know what they're talking about.
[00:01:14]
>> Douglas Crockford: Cuz it's interesting stuff. So this starts in Functional Programming. And as we talked about this morning, there are two senses of Functional Programming. There is the sense that we use in programming languages, which is programming with functions, right? And any language which has functions, which is virtually every language, if you program with those functions, then you're doing functional programming.
[00:01:39] So this found its way into programming languages with FORTRAN II in 1958. There's the declaration. It was written in all caps because the FORTRAN character set only had uppercase. But except for that, it looks remarkably modern, right? JavaScript's functions look like that. It didn't have a var statement, but it had a DIMENSION statement.
[00:02:03] Weirdly, variables were declared by default. So if you just used a variable without declaring it, it assumed it was a local variable, which is actually a lot smarter than what JavaScript does. JavaScript assumes it's a global variable, which is very bad. And FORTRAN had types, so it had integer types and real types, but you didn't have to say which one it was.
[00:02:26] And the way they figured it out was, if it started with an I, J, K, L, M or N, it was an integer, and if it started with any other letter, it was real. Yeah, [LAUGH] like that didn't cause any problems, right? And then if you wanted to make something global, then you would call it COMMON.
[00:02:43] COMMON was FORTRAN's word for global, so you'd declare it there. Then the return statement didn't have an expression on it. So to tell FORTRAN what value to return, you would have an assignment to the name of the function within the body of the function, and that would set the return value.
[00:03:02] So that when you finally hit the return statement or fell out, that would be the value that would go out, so the last one of those would win. And instead of curly braces it just had an END to mark the end of it. So that's sort of boring functional programming.
[00:03:20] Functional programming gets more interesting when you have functions as first-class objects, which we've been seeing. In particular, higher-order functions, where you can pass functions around as arguments, you're starting to get a sense that you can do some neat stuff with that. And it didn't become really good until Scheme showed us how to do lexical closure, and then suddenly stuff takes off and it gets really amazing.
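A minimal JavaScript sketch of those two ideas (the names twice and makeCounter are just illustrative, not from the lesson): a higher-order function takes a function as an argument, and a lexical closure keeps access to the variables of the function that created it.

    // A higher-order function: it takes a function and returns a new function.
    function twice(fn) {
        return function (x) {
            return fn(fn(x));
        };
    }

    // A lexical closure: the returned function keeps access to `count`
    // even after makeCounter has returned.
    function makeCounter() {
        var count = 0;
        return function () {
            count += 1;
            return count;
        };
    }

    var addTwo = twice(function (x) { return x + 1; });
    addTwo(5);            // 7

    var next = makeCounter();
    next();               // 1
    next();               // 2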
[00:03:45] Now the other sense of functional programming is Pure Functional Programming, mathematical functional programming. Functions in programming languages behave nothing like mathematical functions, cuz a mathematical function returns the same value every time you apply it, right? You never have a function that will return one thing and then the next time return something else.
[00:04:07] They're idempotent, they're immortal, they're different. So you can think of functions as maps, so whether we implement one as a lookup table or as computation is just an efficiency argument; both implementations should behave exactly the same. That would be one of the rules with mathematical functions.
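A small JavaScript sketch of that equivalence (the names and the tiny domain are just for illustration): the same mapping, implemented once as computation and once as a frozen lookup table, gives identical answers.

    // The mapping "square", implemented as computation...
    function squareCompute(n) {
        return n * n;
    }

    // ...and as a frozen lookup table over a small domain.
    var squareTable = Object.freeze({1: 1, 2: 4, 3: 9, 4: 16});
    function squareLookup(n) {
        return squareTable[n];
    }

    squareCompute(3) === squareLookup(3);   // true, for every n in the table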
[00:04:30] Now, the argument for wanting to program that way is that mathematical functions are easier to reason about. And it's true, they are, because for any function for any argument, it will always do the same thing. Whereas, programming in real languages gets more complicated because every time you call something, it could be different, and one of those times it could be an error, and it's harder.
[00:04:56] So reasoning about things is easier, so there've been a lot of mathematicians who've been saying programming should be more like math.
>> Douglas Crockford: That means programming without side effects. Now, it's easy to turn JavaScript into a pure functional language. You just have to remove the assignment operators, get rid of loops because things change inside of a loop, right?
[00:05:19] We can't have anything changing. So you use recursion instead. You can use recursive functions to replace loops; that works really well. You would freeze all array literals and object literals as soon as they're made, so you're not tempted to try to mutate them. And you'd have to remove things from the environment, like you'd have to remove Date, because every time you call Date, you get a different time; that's not mathematical.
[00:05:45] You have to get rid of random, because every time you call random you get a different value; that's not mathematical. So you take all that stuff out, and then you're gonna have to take out the DOM, because every time you touch it, it changes, and you can't change anything.
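A minimal JavaScript sketch of programming under those constraints (the names here are just illustrative): a loop replaced by recursion, and a frozen literal that refuses to change.

    // A loop replaced by recursion: sum an array without assignment or mutation.
    function sum(array) {
        return array.length === 0
            ? 0
            : array[0] + sum(array.slice(1));
    }

    // A frozen object literal: attempts to mutate it have no effect
    // (and throw in strict mode).
    var point = Object.freeze({x: 1, y: 2});

    sum(Object.freeze([1, 2, 3]));   // 6
    point.x = 99;                    // ignored; point.x is still 1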
[00:05:57] And then you realize this is getting really hard, because you're having to do I/O in a system in which nothing changes, and that doesn't work. So, you know, we talked about memoization before. It's one of the benefits of being mathematical, cuz you can memoize and everything works. So that's sort of dynamically making the trade-off between computation and storage, or mapping.
[00:06:23] Whereas caching, which is what we do in mutating programming, is hard and it's often wrong, cuz we're making bets on how long some value's gonna be valid, so we don't have to go across the network to get it again. And we're often wrong. If the world never mutated, then it would be easier to always be right.
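A small JavaScript sketch of that trade-off (memoize here is an illustrative helper, not something from the lesson): because a pure function always returns the same result for the same argument, its results can be stored and reused safely.

    // Wrap a pure function so repeated calls with the same argument
    // reuse the stored result instead of recomputing it.
    function memoize(fn) {
        var results = {};
        return function (x) {
            if (!(x in results)) {
                results[x] = fn(x);
            }
            return results[x];
        };
    }

    var slowSquare = function (n) { return n * n; };   // stand-in for an expensive pure function
    var fastSquare = memoize(slowSquare);
    fastSquare(9);   // computed
    fastSquare(9);   // looked up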
[00:06:45] But, of course, the world doesn't work that way. In the real world, everything changes all the time. And so having a programming language in which everything must be immutable makes interacting with the world really hard. So there's this language called Haskell, which is named after Haskell Curry, of currying, or Schönfinkelization, of lambdas, and the Haskell language doesn't allow mutation.
[00:07:13] And if you're just doing pure computation, it's perfect, it's wonderful. But if you then start to try to extend the problem domain and have it become a practical language, then suddenly things get really hard. And so they looked for some way that they could have something that was like mutation without actually mutating anything.
[00:07:34] And they found something, they found it in Monads. Monads come out of category theory, which is a branch of type theory. And Monads allow them to exploit a hole in the function contract. The function contract is that every time you call a function with the same argument, you have to get the same result and cause no side effects.
[00:07:56] But if you pass a function as a parameter to a function, well, every function's different, right? Cuz it's gonna be a new invocation, and it's closed over something else that was never closed over before, so you're constantly in this state of newness. And you can kind of push that to create the illusion of mutability even though you never actually mutate anything.
[00:08:16] So in Haskell, they came up with the IO Monad, which makes it possible to accumulate state. And one of the motivating examples is that you want to be able to add tracing to a set of functions, but do that without having IO. And the Monad provided a way of doing that.
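To make that concrete in JavaScript terms, here is a minimal writer-style sketch (an illustration of accumulating a trace without mutation or IO, not Haskell's actual IO Monad; the names unit, bind, double, and increment are all made up for this example): each function returns its result paired with a trace, and a bind helper threads the trace through a chain of calls.

    // Each traced value is a pair: the result plus an immutable list of messages.
    function unit(value) {
        return Object.freeze({value: value, trace: Object.freeze([])});
    }

    // bind applies a traced function to the value and concatenates the messages,
    // producing a new pair instead of mutating either one.
    function bind(traced, fn) {
        var next = fn(traced.value);
        return Object.freeze({
            value: next.value,
            trace: Object.freeze(traced.trace.concat(next.trace))
        });
    }

    // Two functions to trace; each returns a new pair instead of logging anything.
    function double(n) {
        return {value: n * 2, trace: ['doubled ' + n]};
    }
    function increment(n) {
        return {value: n + 1, trace: ['incremented ' + n]};
    }

    var result = bind(bind(unit(5), double), increment);
    result.value;   // 11
    result.trace;   // ['doubled 5', 'incremented 10']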
[00:08:41] And so if one of these Haskell guys is trying to explain it to you, they'll try to be Socratic, and they say, okay, suppose you've got a function and you want to trace it, how do you do it? And you say, well, I'd call alert, but they go, but you can't call any functions, so how would you do it?
[00:08:58] You go, well, I guess I would have an array, and I'd append something to the end of the array. They say, no, that's mutation, you can't do that, so how would you do it? And it's all like this is some amazing game, but it's a solution to a problem you will never have, and it doesn't make any sense.
[00:09:14] If you start there, you will never get to anything which is meaningful to you, so that exercise is just a waste of time.