Transcript from the "Numbers" Lesson
[00:00:00]
>> [MUSIC]
[00:00:04]
>> Douglas Crockford: There's only one number type in this language, which turns out to be a good thing. If you have multiple number types, as most other languages do, then there's a possibility that you're gonna pick the wrong type. And if you pick the wrong type then bad things happen.
[00:00:22] So if there's only one number type then you can't make that mistake. That turns out to be a good thing. Having multiple sizes of ints was a good idea in Fortran because at that time bits were really expensive. And you wanted to use as few bits as possible in each of your variables.
[00:00:42] But we're here in the future. We've got gigabytes of RAM now. Trying to save bits in a variable is just not a good trade off anymore. But we're still conditioned in most of our programming languages to try to perform that optimization, which in some cases leads to bugs.
[00:01:00] That doesn't happen in JavaScript cuz we only have one number type. Unfortunately it's the wrong number type. JavaScript uses IEEE-754 floating point, the 64-bit type, which is weirdly called double. Which is a stupid name for a number type but that's what it's called in Java. It's because in Fortran it was called double precision and in later languages it was shortened to double.
[00:01:26] It doesn't make any sense, but we have that. There's no integer type, but integers are a subset of floating point, so you have integers, mostly. So JavaScript does not have an int type. And I think that's actually a good thing. Cuz one of the properties of ints in most languages is you can have two numbers that are greater than zero and add them together and get a result that is smaller than either of them.
[00:01:54] Which is insane, but that's how numbers work in those languages, and again it's going back to Fortran. It was cheaper back then to allow the numbers to wrap around. But it doesn't make any sense now and the reliability problems with that are huge. How can you have any confidence in the correctness of a program where your numbers can suddenly go wildly negative unexpectedly?
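For concreteness, a minimal sketch of that point, assuming an ordinary JavaScript console; the particular values are just illustrations, not from the lesson:

```js
// JavaScript has exactly one number type, and large positive sums lose
// precision rather than wrapping around to negative values the way
// fixed-size ints do in many other languages.
typeof 1;          // "number"
typeof 1.5;        // "number"
1 === 1.0;         // true — integers are just floating point values

2147483647 + 1;    // 2147483648 (32-bit int arithmetic would wrap to -2147483648)
```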
[00:02:19] It's nuts, so hurray for JavaScript, it does not have this defect in it. It has different defects, but not this one. JavaScript has number literals. So you can use decimal points. You can use scientific notation. All of these represent exactly the same value. So one of the problems with computer arithmetic in general is that the associative law doesn't hold.
[00:02:48] So (a + b) + c is not equal to a + (b + c). It usually is, but it's not guaranteed to be. So the laws of mathematics don't necessarily apply. And the reason for this is that real numbers are not finite. But computers necessarily are, because they can only have a certain number of bits.
[00:03:12] They cannot have an infinite number of bits. So there's always the possibility that numbers are going to be approximate. And that approximation means that associativity doesn't hold. Now in JavaScript, integers up to 9 quadrillion or so are okay. So you can add one to a number up to 9 quadrillion and it will always work.
[00:03:37] Above a certain point the numbers start to get blurry. So zero and one are effectively equal once you get above 9 quadrillion. But that's a pretty big number. It's more than enough to hold, say, the national debt, and probably will be for some time to come, we hope.
[00:04:02] Unless hyperinflation takes over or something like that. So we're okay for most applications. As long as you're in integer space, everything is going to be pretty good, and associativity does hold there. Now in mathematics, yeah, so there's a weird thing. So because of that, identities like this don't hold.
[00:04:26] There are values of a which will fail here. In fact there are quite a lot of them, and so you need to be careful about how you do arithmetic and what order you do the operations in. Cuz changing the order in which you do operations can visibly change the result.
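A rough sketch of the behavior described above, again assuming a standard JavaScript engine; the slide's exact identity isn't reproduced in the transcript, so the last example is just one identity that happens to fail:

```js
// Several literal forms, one value.
100 === 1e2;                              // true
100 === 100.0;                            // true

// Associativity can fail once fractions are involved.
(0.1 + 0.2) + 0.3 === 0.1 + (0.2 + 0.3);  // false

// Integer arithmetic is exact up to about 9 quadrillion (2 ** 53).
9007199254740991 + 1;                     // 9007199254740992 — still exact
9007199254740992 + 1;                     // 9007199254740992 — here 1 acts like 0

// So identities that hold in mathematics can fail here.
var a = 0.1;
a + 1 - 1 === a;                          // false — adding then subtracting 1 changed the value
```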
[00:04:48]
>> Douglas Crockford: The most reported bug in JavaScript is some variation on this. That you can add decimal numbers in different ways and get different results, 0.1 + 0.2 is not equal to 0.3, and lots of other things. And this is not JavaScript's fault, this is because of the IEEE binary floating point standard.
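The classic version of that bug, as you can reproduce it in any console; the cents workaround at the end is a common convention and an assumption on my part, not something the lesson prescribes:

```js
0.1 + 0.2;          // 0.30000000000000004
0.1 + 0.2 === 0.3;  // false

// When money is involved (as discussed below), a common workaround is to
// work in whole cents, so the arithmetic stays in exact integer territory.
10 + 20 === 30;     // true — 10 cents + 20 cents
```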
[00:05:12] In binary floating point you cannot exactly represent decimal fractions, you can only approximate them. And approximation is not good enough when you're dealing with money. Cuz when you're adding up people's money, people have a reasonable expectation that you're going to get the right answer.
[00:05:32] And you don't when you're using binary math. So, in retrospect, JavaScript should have used a decimal floating point format. Because it turns out we do a lot of applications in decimal. And being able to work accurately in decimal is only important if you're on a planet that uses a decimal system.
[00:05:53] And you know that's where we are, so this is recognized as a mistake. JavaScript has a number of methods on numbers, because numbers are objects. In many languages they're not, but in this one they are. You can add new methods to Number.prototype and increase the functionality of the language.
[00:06:17] For example, there is not a trunc method which will truncate a floating point number to an integer towards zero, but you can add one. You can add this function to Number.prototype.trunc, and then instantly every number in your system knows how to truncate itself, which is an amazing thing.
[00:06:36] This is a level of extensibility that is rare in programming languages. Now with great power comes great responsibility, so generally, you only want to be messing with the prototypes of primordial objects in a library. You don't wanna be messing with them in applications. But libraries can do things like anticipate what's going to be in the next version of the standard and can polyfill the language so that old editions can run programs that newer editions like ES6 can also run.
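A sketch of what that might look like. The exact function from the slide isn't in the transcript, so the floor/ceiling implementation and the polyfill-style guard are one plausible version rather than the definitive one:

```js
// Truncate toward zero by adding a method to Number.prototype.
// The guard keeps it polyfill-style: only define it if it isn't already there.
if (typeof Number.prototype.trunc !== 'function') {
    Number.prototype.trunc = function () {
        // floor moves toward -Infinity, ceil toward +Infinity,
        // so picking between them truncates toward zero.
        return this >= 0 ? Math.floor(this) : Math.ceil(this);
    };
}

(10.7).trunc();     // 10
(-10.7).trunc();    // -10
```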
[00:07:16] Numbers are first class objects in this language. They really are objects. So a number can be stored in a variable, it can be passed as a parameter, it can be returned from a function, and it can be stored in an object. And numbers can have methods. JavaScript borrowed a bad idea from Java, well, borrowed a lot of bad ideas from Java, and one of them was the Math object.
[00:07:39] The idea with the Math object was that some of the math functions can be expensive, particularly the trig functions. And in some configurations, like the micro configurations, they might not need trig functions. So they wanted to put them in a place where it'd be easy to throw them out. They never did that, cuz it turns out that's not a place where you want to save money.
[00:08:04] But JavaScript made the same mistake. It would have been much smarter if all of these had been methods on numbers, but they're not. So there's this weird Math dot thing that you have to do in order to access them. The other thing that was crazy about putting these in a disposable object is that some of these functions, like floor, which is the method for converting something to an integer, are critically important.
[00:08:35] But they wanted to put it in a disposable object that wouldn't be in all implementations; they were clearly not thinking this through. So anyway, JavaScript got this wrong in exactly the same way. It also includes a bunch of constants, which for no good reason are written in uppercase.
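For reference, what the Math-dot style looks like in practice. These are real methods and constants; the comments are just a gloss:

```js
Math.floor(3.7);    // 3 — a function call, not a method on the number itself
Math.sqrt(2);       // 1.4142135623730951
Math.max(1, 2, 3);  // 3
Math.PI;            // 3.141592653589793 — the constants are written in uppercase
```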
[00:08:55] One of the weirdest things that we inherited from the IEEE format is NaN. NaN stands for not a number. But it is a number: if you ask what is typeof NaN, it says number. So you start with that confusion. It is the result of undefined or erroneous operations. And that's actually a good idea.
[00:09:15] In earlier languages, if you did something like try to divide 0 by 0, it would cause an interrupt, which would cause the program to halt. Which turns out to be a very bad thing. So instead we have this signal: you get NaN back, and you can ask at the end of a computation, are you NaN? Then we know that a division by zero happened.
[00:09:38] That's much easier than trying to trap it.
>> Douglas Crockford: It is toxic, so if you have any arithmetic operation with NaN as an input, you'll get NaN as an output. The weird thing is that NaN is not equal to anything, including itself. So if you say is NaN equal to NaN?
[00:09:55] It says false. And even weirder is NaN not equal NaN is true. Which is insane. Mathematics, right? I mean, we have this NaN not equal NaN.
>> Douglas Crockford: Crazy.
>> Speaker 2: [INAUDIBLE].
>> Douglas Crockford: [LAUGH] Yeah, there's a lot of mathematical absurdity in the IEEE format.
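A sketch of the NaN behavior described above, runnable in any console:

```js
typeof NaN;          // "number" — NaN is, confusingly, a number
0 / 0;               // NaN — an undefined operation yields NaN instead of an interrupt

NaN === NaN;         // false
NaN !== NaN;         // true

NaN + 1;             // NaN — NaN is toxic: it propagates through arithmetic

// Asking "are you NaN?" at the end of a computation:
isNaN(0 / 0);        // true
```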
>> Douglas Crockford: So going back to Fortran, while we're talking about mathematical nonsense.
[00:10:30] Fortran was one of the first languages where we got the idea of using equal as the assignment operator. Which was a really bad idea; ALGOL tried to fix it, but we went the other way. So this is mathematical nonsense, right? There's no way that X is equal to X + 1.
[00:10:49] Except there are values that you can put in for X which make this true. For example infinity. JavaScript has a value called Infinity, which represents all numbers which are bigger than the biggest number that it can accurately represent. And so infinity plus one equals infinity. It's true, it's an equation.
[00:11:15] There's also Number.MAX_VALUE, which is the biggest number that it knows how to represent. And if you add one to that, you would think you would get infinity, but you don't, you get MAX_VALUE. It's because once you get above nine quadrillion, one acts like zero and it just fades away, so that's true.
[00:11:37] In fact, 9 quadrillion or anything between 9 quadrillion and infinity will behave the same way. But not NaN. So NaN plus one is NaN, but NaN is not equal to NaN, so it's false. That doesn't work. Yeah, I hate that.
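A sketch of those equations, runnable in a console:

```js
Infinity + 1 === Infinity;                   // true

Number.MAX_VALUE;                            // 1.7976931348623157e+308
Number.MAX_VALUE + 1 === Number.MAX_VALUE;   // true — the 1 just fades away

// But not NaN: NaN + 1 is still NaN, yet the comparison is false,
// because NaN never equals anything, including itself.
NaN + 1;                                     // NaN
NaN + 1 === NaN;                             // false
```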