Transcript from the "Numbers" Lesson
>> Douglas Crockford: There's only one number type in this language, which turns out to be a good thing. If you have multiple number types, as most other languages do, then there's a possibility that you're gonna pick the wrong type. And if you pick the wrong type then bad things happen.
[00:00:22] So if there's only one number type then you can't make that mistake. That turns out to be a good thing. Having multiple sizes of ints was a good idea in Fortran because at that time bits were really expensive. And you wanted to use as few bits as possible in each of your variables.
[00:00:42] But we're here in the future. We've got gigabytes of RAM now. Trying to save bits in a variable is just not a good trade-off anymore. But we're still conditioned in most of our programming languages to try to perform that optimization, which in some cases leads to bugs.
[00:01:54] Which is insane, but that's how numbers work, and again it's going back to Fortran. It was cheaper back then to allow the numbers to spin over. But it doesn't make any sense now and the reliability problems with that are huge. How can you have any confidence in the correctness of a program where your numbers can suddenly go wildly negative unexpectedly?
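JavaScript's single number type is a 64-bit float and does not wrap, but its bitwise operators coerce to 32-bit integers, which reproduces the rollover behavior he's describing. A minimal sketch:

```javascript
// JavaScript numbers are 64-bit floats and do not wrap, but the bitwise
// operators coerce through ToInt32, reproducing the classic rollover.
const maxInt32 = 0x7fffffff;         // 2147483647, the largest 32-bit int
const wrapped = (maxInt32 + 1) | 0;  // | 0 forces 32-bit integer arithmetic
console.log(wrapped);                // -2147483648: suddenly wildly negative
```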
[00:02:48] So (a + b) + c is not equal to a + (b + c). It usually is, but it's not guaranteed to be. So the laws of mathematics don't necessarily apply. And the reason for this is that real numbers are not finite. But computer numbers necessarily are, because they can only have a certain number of bits.
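A concrete case where grouping changes the answer:

```javascript
// Floating-point addition is not associative.
const left = (0.1 + 0.2) + 0.3;  // 0.6000000000000001
const right = 0.1 + (0.2 + 0.3); // 0.6
console.log(left === right);     // false
```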
[00:03:37] Above a certain point the numbers start to get blurry. So zero and one are effectively equal once you get above 9 quadrillion. But that's a pretty big number. It's more than enough to hold, say, the national debt, and probably will be for some time to come, we hope.
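The blur he's describing sets in at 2 ** 53, which is about 9 quadrillion:

```javascript
// Above 2 ** 53, adjacent integers can no longer be distinguished,
// so adding 1 changes nothing.
console.log(2 ** 53);                 // 9007199254740992
console.log(2 ** 53 + 1 === 2 ** 53); // true: 1 acts like 0 here
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, the last safe integer
```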
[00:04:02] Unless hyperinflation takes over or something like that. So we're okay for most applications. As long as you're in integer space, everything is going to be pretty good, and associativity does hold there. Now in mathematics, yeah, so there's a weird thing. So because of that, identities like this don't hold.
[00:04:26] There are values of a which will fail here. In fact there are quite a lot of them, and so you need to be careful about how you do arithmetic and in what order you do the operations. Cuz changing the order in which you do operations can visibly change the result.
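The identity on the slide isn't captured in the transcript, but one of this kind is a + 1 - 1 === a, which fails once a gets large enough:

```javascript
// a + 1 - 1 === a looks like it must always hold, but it doesn't:
// for large a, a + 1 rounds back down to a, and then - 1 moves it.
const a = 2 ** 53;            // about 9 quadrillion
console.log(a + 1 - 1 === a); // false
```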
[00:05:12] In binary floating point you cannot accurately represent decimal fractions. You can only approximate them. And approximation is not good enough when you're dealing with money. Cuz when you're adding up people's money, people have a reasonable expectation that you're going to get the right answer.
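The standard demonstration, plus the usual workaround of staying in integer space:

```javascript
// Decimal fractions like 0.1 have no exact binary representation.
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false: not good enough for money
// A common workaround: count in integer cents, where arithmetic is exact.
const cents = 10 + 20;
console.log(cents === 30);      // true
```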
[00:06:17] For example, there is not a trunc method which will truncate a floating point number to an integer, towards zero, but you can add one. You can add to Number.prototype.trunc this function, and then instantly, every number in your system knows how to truncate itself, which is an amazing thing.
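The exact function body isn't in the transcript; a sketch of the extension he describes might look like this (modern engines now ship Math.trunc, but the point here is the extensibility):

```javascript
// Guard so we don't clobber an existing definition -- the polyfill pattern.
if (!Number.prototype.trunc) {
  Number.prototype.trunc = function () {
    // Truncate toward zero: floor for positives, ceil for negatives.
    return this >= 0 ? Math.floor(this) : Math.ceil(this);
  };
}
// Instantly, every number knows how to truncate itself.
console.log((10.7).trunc());  // 10
console.log((-10.7).trunc()); // -10
```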
[00:06:36] This is a level of extensibility that is rare in programming languages. Now with great power comes great responsibility, so generally, you only want to be messing with the prototypes of primordial objects in a library. You don't wanna be messing with them in applications. But libraries can do things like anticipate what's going to be in the next version of the standard, and can polyfill the language so that older editions can run programs written for newer editions like ES6.
[00:07:39] The idea with the math object was some of the math functions can be expensive, particularly the trig functions. And in some configurations like the microconfigurations, they might not need trig functions. So they wanted to put them in a place where it'd be easy to throw them out. They never did that cuz it turns out that's not a place where you want to save money.
[00:08:55] One of the weirdest things that we inherited from the IEEE format is NaN. NaN stands for not a number. But it is a number: if you ask what is typeof NaN, it says number. So you start with that confusion. It is the result of undefined or erroneous operations. And that's actually a good idea.
[00:09:15] In earlier languages, if you did something like try to divide 0 by 0, it would cause an interrupt, which would cause the program to halt. Which turns out to be a very bad thing. So instead we have this signal where you get NaN back, and so you can ask at the end of a computation, are you NaN, and then we know that a division by zero happened.
[00:09:38] That's much easier than trying to trap it.
>> Douglas Crockford: It is toxic, so if you have any arithmetic operation with NaN as an input, you'll get NaN as an output. The weird thing is that NaN is not equal to anything, including itself. So if you say, is NaN equal to NaN?
[00:09:55] It says false. And even weirder, NaN not equal NaN is true. Which is insane. Mathematics, right? I mean, we have this NaN not equal NaN.
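All of the NaN behavior from this passage in one place:

```javascript
console.log(typeof NaN);            // "number", confusingly
console.log(Number.isNaN(0 / 0));   // true: an undefined operation yields NaN
console.log(NaN === NaN);           // false: NaN equals nothing, even itself
console.log(NaN !== NaN);           // true, which is even weirder
console.log(Number.isNaN(NaN + 1)); // true: NaN is toxic, it propagates
```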
>> Douglas Crockford: Crazy.
>> Speaker 2: [INAUDIBLE].
>> Douglas Crockford: [LAUGH] Yeah, there's a lot of mathematical absurdity in the IEEE format.
>> Douglas Crockford: So going back to Fortran, while we're talking about mathematical nonsense.
[00:10:30] Fortran was one of the first languages where we got the idea of using equal as the assignment operator. Which was a really bad idea, ALGOL tried to fix it, but we went the other way. So this is mathematically nonsense, right? There's no way that X is equal to X + 1.
[00:11:15] There's also Number.MAX_VALUE, which is the biggest number that it knows how to represent. And if you add one to that, you would think you would get infinity, but you don't; you get MAX_VALUE. It's because once you get above nine quadrillion, one acts like zero and it just fades away, so that's true.
[00:11:37] In fact, 9 quadrillion or anything between 9 quadrillion and infinity will behave the same way. But not NaN. So NaN plus one is NaN, but NaN is not equal to NaN, so it's false. That doesn't work. Yeah, I hate that.
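The MAX_VALUE and NaN cases side by side:

```javascript
// Adding 1 to the biggest representable number just fades away.
console.log(Number.MAX_VALUE + 1 === Number.MAX_VALUE); // true
console.log(Number.MAX_VALUE < Infinity);               // true: still finite
// But NaN breaks the pattern: NaN + 1 is NaN, yet the comparison is false,
// because NaN is never equal to NaN.
console.log(NaN + 1 === NaN);                           // false
```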