Bare Metal JavaScript: The JavaScript Virtual Machine

Physical & Virtual Machines

Miško Hevery

Qwik Creator (Previously Angular)

The "Physical & Virtual Machines" Lesson is part of the full, Bare Metal JavaScript: The JavaScript Virtual Machine course featured in this preview video. Here's what you'd learn in this lesson:

Miško discusses the virtual machine's role in the communication between code and the CPU. Parts of the CPU, including registers, the arithmetic logic unit, and the program counter, and how they interact with memory, are also covered in this segment.

Transcript from the "Physical & Virtual Machines" Lesson

[00:00:00]
>> Let's talk about CPUs. I don't think this is a picture of a CPU, or if it's a picture of a CPU, it's a really old one. Because CPUs today have a lot more legs. I think they're called legs, I don't know what they're called, pinouts, or whatever.

[00:00:12]
What it is is you have a chip, right? And the chip has a silicon die inside. And the silicon is protected in this black packaging, right? So what you're looking at is really just packaging, it's not actually the silicon, the silicon's inside of it. And the silicon has wires coming out, and these wires have to be connected to the rest of the system, and so they have these pinouts.

[00:00:29]
Usually, we probably do ball grid arrays or other things these days, because a CPU today can easily have 500 pins. Maybe the latest ones are pushing a thousand, so you need a lot of legs to kind of wire up, solder, et cetera. So the trend that's actually happening is that, inside of this physical packaging, you have multiple pieces of silicon.

[00:00:53]
So you'll have a silicon for the CPU and a silicon for memory right next to each other, and then they'll just interconnect them. You might be wondering why the silicon for the CPU and the memory actually have to be separate, why can't they be on the same physical silicon? Turns out that the manufacturing process for DRAM is incompatible with the manufacturing process for logic.

[00:01:13]
And so they can't physically put them on the same silicon, they have to actually have two separate silicons that go through separate manufacturing processes. Okay, so I write something like this, right, I write an array, and I maybe write a map on it, and if you look at it, there's so much stuff going on in here: there are strings, there's a concept of an array, there's a concept of a class, right?

[00:01:35]
Array is a class that has a method called map, there is a property map that exists on some prototype somewhere. There is a function that the VM has to create that represents the mapping operation that is happening over here, right? So there are a lot of things inside of just a simple line of code like this, right?
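
To make that concrete, here is a hedged reconstruction of the kind of one-liner being described (the exact code on the slide isn't shown in the transcript):

    // One innocuous-looking line of JavaScript...
    const lengths = ['foo', 'bar', 'baz'].map(s => s.length);
    // ...and behind it the VM has to materialize: three string values, an Array
    // instance, a lookup of `map` on Array.prototype, a function object for the
    // arrow callback, and a brand-new array to hold the results.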

[00:01:58]
And so all of this has to get translated to machine code. Machine code is this thing on the right. It's just numbers. It's really just numbers. That's the only thing that CPUs understand. It's just numbers, right? And so the question then becomes, well, how does that happen, right?

[00:02:14]
Like this is crazy magic. The thing in the middle is what we call assembly, which is basically the numbers converted to friendly, human-readable text. It kinda represents an in-between state. But fundamentally, that's what a VM is, right? A VM is translating the thing you write into all these numbers that the CPU can go and execute.
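
As a rough illustration of those three layers (the slide itself isn't in the transcript, and x86-64 is used here only as an example instruction set):

    // source:        a + b
    // assembly:      add eax, ebx     ; human-readable mnemonic for the same thing
    // machine code:  0x01, 0xd8       ; the numbers the CPU actually consumes
    const machineCode = new Uint8Array([0x01, 0xd8]); // really just numbers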

[00:02:40]
And so again, that's what the thing does. So there's many different VMs out there. I mean, the most famous one is probably V8, right? There are many VMs for JavaScript, and there are a lot more VMs: there's the Java VM, there's other languages as well, Python and so on. For JavaScript there's also SpiderMonkey and ScriptCore, CoreScript...

[00:03:04]
>> JavaScriptCore.
>> JavaScriptCore. That thing. Yes. That's from Apple, right? And so, surprisingly, I think at a high enough level all these VMs really fundamentally work the same. I mean, the implementation details are different, the way they lay out things in memory is different, etc. But fundamentally, they work in the same way.

[00:03:21]
So if I can explain to you how one works, I'm hoping that all the other ones will not be too far off. And so, as I said, in physical machines, physical CPUs, at the end of the day, all we have is numbers. Those are stored in memory, right?

[00:03:37]
So you have memory, and you can think of memory as a humongous array of numbers, that's all it is, right? And then we have a CPU, and the CPU can do things like arithmetic operations. It can add numbers, subtract numbers, divide, multiply, do logical operations on them, and so on and so forth.
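
A minimal sketch of that mental model in JavaScript (the sizes and addresses here are arbitrary, just for illustration):

    // Memory is nothing but a big, flat array of numbers.
    const memory = new Uint8Array(64 * 1024); // 64 KiB of byte-addressable "RAM"
    memory[0x10] = 40;                        // store a number at address 0x10
    memory[0x11] = 2;
    // All the CPU can really do with it is move numbers around and do arithmetic:
    const sum = memory[0x10] + memory[0x11];  // 42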

[00:03:56]
And you have this flat memory that somehow you have to turn into interesting things like objects with structure. You have conditional jumps, which is kind of like an if statement, sort of, but a very primitive one, and that's all you have. And you have this thing called subroutines that, if you squint hard enough, you can say is kind of like a function, but not really, right?

[00:04:20]
And out of these primitives, you need to build up this rich world of strings, local variables, memory management with garbage collection, nice, beautiful if and switch statements, looping things so you can do for, while, do-while, right? Primitive types, objects and arrays, classes, functions, closures, exceptions, stack traces, none of those things exist, right?
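
As one hedged example of that build-up, here is how an if/else could be lowered onto nothing but a primitive conditional jump, written as a toy instruction list (the opcode names are invented for illustration, they're not from the course demo):

    // High level:  if (x < 10) { y = 1; } else { y = 2; }
    // Lowered onto a conditional jump, with x in register A and y in register B:
    const program = [
      /* 0 */ { op: 'JUMP_IF_GE', reg: 'A', value: 10, target: 3 }, // the "if": skip ahead when A >= 10
      /* 1 */ { op: 'SET',        reg: 'B', value: 1 },             // then-branch: y = 1
      /* 2 */ { op: 'JUMP',       target: 4 },                      // jump over the else-branch
      /* 3 */ { op: 'SET',        reg: 'B', value: 2 },             // else-branch: y = 2
      /* 4 */ { op: 'HALT' },
    ];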

[00:04:44]
And so the VM has to create an illusion of all of these things for us. And the way it works is that you have a big array of memory, and you have a CPU. And the CPU has what is known as registers; basically, you can think of those as local variables.

[00:05:04]
All right, there are very few; different CPUs have different amounts. In the demo I'm gonna show you, I think we only have two registers. The 8086, or some CPU like that, I believe had like eight, or somewhere in that range. I think modern CPUs probably have 60 to 100, somewhere in that range, but it's a limited number, right?

[00:05:27]
That's tiny in comparison to the size of the memory. Then there's the ALU, which stands for Arithmetic Logic Unit. This basically knows how to do math. Any kind of math you can think of. Multiplication, division, subtraction, whatever, math goes here, right? And then there's a couple of special registers.
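
To keep the pieces straight, here is the registers-plus-ALU part of that picture in the same toy JavaScript style (the register names and the set of operations are made up; the transcript only says the demo uses two registers):

    // A handful of registers (fast, fixed, tiny) versus a huge, flat memory:
    const registers = { A: 0, B: 0 };
    const memory = new Uint8Array(64 * 1024);
    // The ALU is just the piece that does math on the values the CPU hands it:
    const alu = {
      add: (x, y) => (x + y) | 0,
      sub: (x, y) => (x - y) | 0,
      and: (x, y) => x & y,
    };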

[00:05:47]
In this diagram, I don't know why they only pulled out this one special register, the program counter; there's other ones, like the stack pointer and other things. And then what a program counter does is basically just hold a pointer to the location of the next instruction to read, right? And so the way this works is that the CPU, on its address bus, this shows one line but a 64-bit CPU will have 64 wires, will say to the memory, I need the data from this specific location, right?

[00:06:18]
And then the memory will send back the data on the memory bus right here. Again, 64 bits, so you can see how we can easily get to 500 pinouts on a CPU, right? Because each of these buses requires as many wires as the bits you have. This sends the instruction in.

[00:06:33]
And this instruction tells the CPU what to do. An instruction might say, take this number and put it in register A, or take this number from register B and add it to register A, or something of that sort, right? And the program counter automatically increments and basically reads the next location in memory for the next thing that the CPU is supposed to be doing, right?

[00:06:53]
So you have this program counter that is just incrementing, reading the data from the memory, and each number that shows up is a specific instruction: this number means you are to add these two things together, or this number means you are to do this operation, right? And so that's basically what it is.
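
Putting those pieces together, here is a minimal fetch-decode-execute loop in JavaScript, a sketch of the cycle just described rather than any real VM or CPU (the opcodes match the toy instruction list above and are likewise invented):

    // A toy CPU: two registers, a program counter, and a program to step through.
    function run(program) {
      const reg = { A: 0, B: 0 };
      let pc = 0;                              // program counter: index of the next instruction
      while (true) {
        const instr = program[pc++];           // fetch, and auto-increment the counter
        switch (instr.op) {                    // decode
          case 'SET':        reg[instr.reg] = instr.value; break;
          case 'ADD':        reg.A = reg.A + reg.B; break;   // "add register B to register A"
          case 'JUMP':       pc = instr.target; break;
          case 'JUMP_IF_GE': if (reg[instr.reg] >= instr.value) pc = instr.target; break;
          case 'HALT':       return reg;
        }
      }
    }

    // Example: A = 40, B = 2, A = A + B  ->  { A: 42, B: 2 }
    run([
      { op: 'SET', reg: 'A', value: 40 },
      { op: 'SET', reg: 'B', value: 2 },
      { op: 'ADD' },
      { op: 'HALT' },
    ]);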
