Full Stack for Front-End Engineers, v3

Orchestration & Load Balancing

Jem Young

Netflix
The "Orchestration & Load Balancing" Lesson is part of the full, Full Stack for Front-End Engineers, v3 course featured in this preview video. Here's what you'd learn in this lesson:

Jem discusses container orchestration and the services used to manage containers, such as Kubernetes, as well as balancing request loads across multiple servers. The different types of scheduling algorithms are also covered in this segment.

Transcript from the "Orchestration & Load Balancing" Lesson

[00:00:00]
>> So in a real-world instance, you have thousands and thousands of containers, and you've seen how easy they are to spin up. We need some way of bringing them online, upgrading them, and taking them offline; there are people whose whole job is doing this.

[00:00:16]
So we call this orchestration: bringing servers online, taking them offline. The one you've probably heard of most is Kubernetes. You may have heard the term and wondered, what does it do? That's what Kubernetes does: it manages your containers for you. As you could tell when we created one container and then created another, that process is pretty manual.

[00:00:36]
But if we use something like Kubernetes, or even Docker Swarm or Apache Mesos, we can create hundreds of containers at one time, all running the exact same instance of our application. That's what we call orchestration. Obviously, we're not going to create hundreds of containers here; it's not super practical for this instance.
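
[For a rough idea of what that scaling looks like in practice, here's a sketch with a hypothetical app named my-app, assuming a running Kubernetes cluster or Docker Swarm; neither is part of this lesson's setup:]

    # Kubernetes: run 100 identical copies of the app's container
    kubectl scale deployment my-app --replicas=100

    # Docker Swarm equivalent
    docker service scale my-app=100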

[00:00:53]
And Kubernetes is probably its own lesson. Is there a Frontend Masters course on Kubernetes?
>> Brian Holt's course, Complete Intro to Containers, gets deeper into Docker and has a section that covers Kubernetes.
>> Yeah, so if you want to dive more into containers, there's a course for that on Frontend Masters. It's a whole thing.

[00:01:12]
So, as I talked about earlier, we have two containers now; how do we get Nginx to talk to them? We're going to use a load balancer. And we need a load balancer because, what if one of our servers is overloaded? If we have two servers but all the traffic is going to one, that doesn't make sense.

[00:01:26]
Why do we have multiple servers? So we can tell Nginx, if one server is running at 90% of CPU time, why don't we route the request to the one that's running lower? That way we balance the load across all of our servers. This is a really important concept if you're talking about multi-container deployments.

[00:01:47]
There are different types of algorithms for deciding which server a request should go to; they're called scheduling algorithms. Round robin is the default, so that just means 1, 2, 3, start again, 1, 2, 3. There's IP hashing, where you're saying all IPs from this particular block or domain go here, all IPs from that one go there.
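
[A sketch of what those two look like in an Nginx upstream block; the upstream name and server addresses here are placeholders:]

    # round robin is Nginx's default; no extra directive needed
    upstream nodebackend {
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
    }

    # or, with IP hashing, requests from the same client IP
    # always go to the same server:
    upstream nodebackend {
        ip_hash;
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
    }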

[00:02:07]
There's random, which is just what it implies: random. I don't really know what that one's good for, but you know.
>> [LAUGH]
>> Yeah?
>> There's a really cool article on two random choices. With least connections and least load, you have to sample all of your servers, and if you have a lot of them, that might take a while. If you take two truly random choices and pick the less loaded of the two, you get a surprisingly even distribution.

[00:02:36]
>> Yeah.
>> And you only have to look at two instead of 1000.
>> Yeah, that's good thinking: you take two random choices and then pare those down by least connections, versus getting the server load for every single one. Yeah, that's really sharp. And one of the scheduling algorithms we can use is least connections, but again, there's a cost to it: you have to know how many connections are on the current server.
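
[In Nginx, least connections is the least_conn directive, and the two-random-choices approach described above is available as the random directive in open-source Nginx 1.15.1 and later. Addresses are placeholders:]

    upstream nodebackend {
        # send each request to the server with the fewest active connections
        least_conn;
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
    }

    # or: pick two servers at random, then use the one with fewer
    # active connections (requires Nginx 1.15.1+)
    upstream nodebackend {
        random two least_conn;
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
    }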

[00:02:57]
Then there's least load; again, you have to measure the load of each server. And I've talked about load, but we haven't actually looked at how to see it. So run the htop command in your terminal, and I'll go ahead and do that too: htop. That shows a nice breakdown of what exactly our server is doing.

[00:03:24]
I wanna say the Mac equivalent would be, like, Application Monitor? No, someone remind me.
>> Activity monitor.
>> Activity Monitor. I never remember Activity Monitor, I don't know why. It's a pretty similar concept, though that one's much shinier because it's an actual GUI. But this is useful if you're like, why is my server so slow?

[00:03:52]
We can look and see there's a node process running, so we could just kill that node process and watch our server load drop. Right now our server load is pretty good; we're only using 0.1, 0.7%, and we have a lot of free memory. So I'll just Ctrl+C out of that.
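
[Assuming htop is installed (sudo apt install htop on Ubuntu); the process ID below is made up for the example:]

    htop                  # interactive view of CPU, memory, and running processes
    ps aux | grep node    # find the node process ID without htop
    kill 2345             # stop a process by its PID (2345 is a placeholder)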

[00:04:11]
So htop is nice to know about, to see what your server is actually doing. Now let's implement a load balancer in Nginx: we have two servers now, so we can actually load balance from one to the other. What we're going to do is create something called an upstream, and that just says, hey, this is the server cluster that I want to connect to.

[00:04:36]
We give it a name, we'll probably call it nodebackend or something like that, and we give it a list of the servers that belong to that cluster. Then in our server block, we just proxy_pass to that server cluster, and Nginx will take care of the rest for us.

[00:04:53]
And if we want to implement a different algorithm in that upstream, we can say least_conn, but we'll keep the default, which is round robin. So let's go ahead and do that now.
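
[A sketch of the config being described, assuming the two containers are exposed on local ports 3000 and 3001; adjust the upstream name and ports to your setup:]

    upstream nodebackend {
        # round robin by default; add least_conn; here to change the algorithm
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
    }

    server {
        listen 80;

        location / {
            # Nginx picks which upstream server handles each request
            proxy_pass http://nodebackend;
        }
    }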
