The "Containers & Microservices" Lesson is part of the full, Full Stack for Front-End Engineers, v2 course featured in this preview video. Here's what you'd learn in this lesson:

Jem introduces containers, microservices, and the relationship between the two.


Transcript from the "Containers & Microservices" Lesson

[00:00:01]
>> Jem Young: Now we're gonna talk about containers. Containers are fascinating. They are changing the landscape of servers and the way we think about how to structure architecture. A little bit ago we talked about how architecture matters more than we think, and as senior engineers, architecture is something we need to think about.

[00:00:21] And as senior full stack engineers, we want to think carefully about how our applications are structured, as in how they live on the server. So we're talking about containers because they have been changing the game across the entire landscape. Let's start with a microservice. Does anybody have a good definition of a microservice, who's not Sam?

[00:00:39] Like, throw it out. What does it sound like?
>> off screen female: A service that only completes one task?
>> Jem Young: Yeah, I like that. It's a teeny tiny service. When we talk about microservices, we're talking about an architecture of loosely connected services. It's kind of a misnomer, because just because it's a microservice doesn't mean it's really small and lightweight.

[00:01:06] So I like your definition, that it only does one thing, because that's a better definition of a microservice. It only does one thing. What's the opposite of a microservice architecture? Yes.
>> off screen female: A monolith?
>> Jem Young: A monolith. Yes. That is one app that does everything.

[00:01:23] Historically that's been a Java application, or a Node application, or Python with Django, something like that, where it's all one application. There are trade-offs we make when we talk about moving to microservices versus a monolith. The good thing about a monolith is that you're all using the same language.

[00:01:45] So if you hire a bunch of Python people, we're all working in Python. They're not separate services. Security updates, or migrating from Python 2 to Python 3, you can do all at one time. And it actually makes maintenance a little bit easier because you don't have a lot of disparate languages and services talking to each other.

[00:02:00] It's just one application.
>> Jem Young: What is the downside of a monolith?
>> off screen female: It's just harder to make changes and automate things. You have to push everything all at once.
>> Jem Young: Yeah.
>> off screen male: With microservices, where everything is loosely coupled, you can kinda by design guarantee that everything is independent.

[00:02:30] And with a monolith, you can fall into the pattern of having one thing affect multiple other things and it's tightly coupled. And you can break things and have unintended consequences.
>> Jem Young: Yeah, it's exactly like you said. Would you rather have a bunch of tiny houses that are easier to maintain?

[00:02:48] Or would you rather have one giant building? It depends. I'm not here to tell you to use one or the other. I'm just here to tell you about the different types; I don't wanna politicize it too much.
>> off screen male: You talked about upgrades and keeping your stuff upgraded for security.

[00:03:06] That's harder with a monolith: if you are upgrading packages, you have to upgrade across the entire code base, versus little services. Or it can be harder. It depends.
>> Jem Young: It depends. So yes and no, it's not one-size-fits-all, and anybody that says microservices are better than monoliths, or that monoliths are better than microservices.

[00:03:29] They're both wrong, because it really depends on your application, how your company is structured, what type of engineers you have, things like that. As far as applications go, yeah, it might be easier to upgrade a microservice because you can upgrade them one at a time. But on the other hand, you have all the upgrades you have to do for every single microservice.

[00:03:46] You have all the security patches you need to do for every single microservice, versus a monolith, where you do it one time and it's all together. It can be more cumbersome, but it's easier to do all at once. So again, this whole container, microservice, monolith architecture question: it depends. It depends on your use case, it depends on how your company builds things.

[00:04:04] But microservices are largely built on this concept now called containers, which we'll talk about in a second. But I can speak a little bit about microservices because I work at Netflix. This is an example of our internal infrastructure, but these are fake names, obviously.

[00:04:22] It wouldn't tell you anything if you knew what a "fieldcraft" was. That's pretty cool, fieldcraft, makes me sound like a spy. But Netflix is one of, I won't say one of the leaders, it makes me sound like I'm tooting my own horn, but we are one of the [LAUGH] leaders in microservices.

[00:04:41] It's something we all believe, that it's more efficient for our use cases. For instance, I work on the UI team, so I live about here, because I'm not quite at the edge gateway. That would be Zuul, one of our open source load balancers and proxies; it does lots of things.

[00:04:59] That would be about here, coming in, and I work about here. So I live on the UI layer, but behind that is this huge [LAUGH] mess of complexity. But the good thing is my team runs React in a Node stack. Other people run Java, other people run Groovy, which is a language that runs on the JVM.

[00:05:19] Other people run PHP, some people run Python. But every one of these teams can maintain their own stack. And as long as we maintain that common API, you can kind of do whatever you want. It very much lives up to that freedom and responsibility. You maintain your own stack, you maintain your own architecture.
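To make the "common API" idea concrete, here is a hedged sketch. The service name, URL, and response fields are invented for illustration; the point is that callers only depend on the HTTP contract, not on the language behind it.

```bash
# Hypothetical endpoint: the service name, host, and response fields are made up
# for illustration. The UI layer only depends on this HTTP contract, not on
# whether the team behind it chose Node, Java, Groovy, or Rust.
curl -s -H 'Accept: application/json' \
  http://profile-service.internal/v1/profiles/42
# As long as the response keeps the agreed-upon shape, the implementation behind
# the endpoint can change freely, for example:
# { "id": 42, "plan": "premium", "region": "us-east-1" }
```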

[00:05:35] And if your team is, like, the world's greatest Rust programmers, write your service in Rust. I don't need to know how it works. That's the benefit of the microservice architecture. The downside is, look at this complexity, instead of one thing here, which is this giant Java app or something like that.

[00:05:53] We have all these other apps that need to talk to each other and maintain connections and things like that, and we do this with this concept called a container. Now, what do you think a container is?
>> Jem Young: Yes.
>> off screen male: It's a way of partitioning your code in an environment that it can run in, with only the necessary components included.

[00:06:19]
>> Jem Young: Yes. So I'll ask you an even harder question. What's the difference between a container and a virtual machine?
>> off screen male: A virtual machine is a whole OS simulating a machine. A container can be lighter weight.
>> Jem Young: Precisely, that's exactly right. Virtual machines have an entire operating system.

[00:06:44] So in the old days of the microservice architecture, I would have to run a version of Ubuntu like we're running now. And then I'd run Node. And then if I wanna run another microservice for my database, I'd run Fedora or Ubuntu with another version of Node. And that was the old days.

[00:07:00] That seemed pretty good, cuz you're like, well, yeah, that's a little bit heavy, but at least now every service has its own operating system, and they can only talk to other services over known APIs. Containers take that concept and trim it down even more. Instead of every container having its own operating system, we run just the set of libraries that it needs, and only that, nothing else.
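As a concrete sketch of "just the libraries it needs": the lesson doesn't name a specific container tool, but with Docker (one popular runtime, assumed here) a small image for a Node service might look like this. The file names and entry point are illustrative.

```bash
# A minimal sketch assuming Docker as the container runtime; names are illustrative.
cat > Dockerfile <<'EOF'
# Small Node base image: the runtime plus a minimal userland, not a full desktop OS
FROM node:18-alpine
WORKDIR /app
# Install only the dependencies the service actually needs
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Hypothetical entry point for the service
CMD ["node", "server.js"]
EOF

# Package the app and its libraries into an image, then run it as a container
docker build -t my-api .
docker run --rm -p 3000:3000 my-api
```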

[00:07:24] And that's what a container is. Containers are as powerful as they are today, and we can only use them, because of cloud computing and the advances that we've made. We've made advances in the hypervisor, that is, the process that controls other processes and how they talk to each other.

[00:07:40] And that's how we're working right now. That's how our server is up and running: the fact that we can do process isolation. I'm running on this tiny, tiny, tiny sliver of the server. And honestly, you all could be running on the same server as me, I have no idea.

[00:07:54] Because we have process isolation, and the hypervisor is really good at segmenting our resources from one another. When we build on top of that, we have this idea of containers. So instead of the VPS, we can zoom in even more: we take that concept of a server and we just segment that out into different sections.

[00:08:12] But every section only has its own libraries, its own resources. It doesn't necessarily even know about the operating system that it's on. It doesn't care. That's what the wrapper is for, something like the orchestration layer. Yes?
>> off screen male: I was gonna say, it's kind of like tree shaking.

[00:08:30] Where you only include the functionality that you actually use, versus including the entire library.
>> Jem Young: Yes exactly, or as I like to say, the Columbo way of doing it. Just the facts. It's just what it needs. So rather than including an entire operating system on every single application, we'll just include the libraries that it needs.

[00:08:52]
>> Jem Young: This was the old days. This would be a virtual machine. This could be our server, actually, it's pretty close to our server right now. We have NodeJS running, we have Nginx running. Later I'll go over how to install MySQL. We won't do too much with it, but we can run Python too.

[00:09:07] This was the old way of doing it. Containers are something like this. It's still one server, but now this is kind of its own micro server, and the MySQL database is now running on its own micro server, Nginx, etc., etc. And they only have the libraries that they need. Why is this beneficial?

[00:09:26] Like, what's powerful about this? We're not running an entire operating system for every single application now, but what's something more we can do when we're controlling the individual containers themselves?
>> off screen male: Run several of them. Several copies.
>> Jem Young: Yeah, no, you can, but we can do that with virtual machines too.

[00:09:48] But you're on the right track. What else?
>> Jem Young: They are. Yes, they are faster. But there's one benefit we probably don't think of when we think of containers.
>> off screen male: Static analysis.
>> Jem Young: No, what do you mean by that?
>> off screen male: Static analysis allows you to trace through the graph of what the dependency tree is and figure out what to include, what to exclude.

[00:10:20] It reduces not just resources in terms of processing, but also memory.
>> Jem Young: Yeah. Take that idea further, because you're absolutely right. I don't know if I'd call it static analysis so much as the analysis of what every single application is doing. Rather than looking at the server as a whole, where I have to check my load, run top or other commands to see exactly what a container is doing, or what a process is doing.

[00:10:49] When I have it in a container, I know precisely what it's doing. And by doing that, I can limit the amount of resources that a given container has, and I can scale up and scale down much faster. For instance, if I run a Python web server, it's gonna be blocking.

[00:11:06] We talked about blocking requests earlier, where if I make a really slow database query or something like that, it's gonna block all the other requests coming in unless I have some sort of pooling enabled, but that'll slow down everything else. But what if I want to just limit the resources that Python can use?

[00:11:23] And I want to allocate them to Node or something like that. I can more tightly control what's happening on my server, since each container is kind of its own server in a way, and I can control that. And that's really, really powerful. I'll talk about the benefits of containers even more, but part of that power is, like you all were saying, you can roll these out much, much faster.
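As a hedged sketch of that resource control, assuming Docker as the runtime (the lesson doesn't name one) and made-up image and container names: each service runs in its own container on the same host, and the blocking Python one gets a hard cap so it can't starve the others.

```bash
# Hedged sketch of per-container resource limits, assuming Docker; the image and
# container names are made up. The blocking Python service gets a hard cap so it
# can't starve the Node service sharing the same host.
docker run -d --name py-worker --memory=256m --cpus=0.5 my-python-service
docker run -d --name node-api  --memory=512m --cpus=1.0 my-node-service

# Instead of running top against the whole box, inspect each container directly
docker stats --no-stream py-worker node-api
```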

[00:11:48] The startup time is much faster because I'm not starting an entire operating system that I have to install, run unattended upgrades on, install Nginx on, and all those things, on every single server. I can just run the libraries, and I can distribute these over time.

[00:12:03] The other benefit is I can make updates to my Node application or my Python application without the other services even knowing about it. So I can control the individual rollouts, rather than rolling out entire servers, which take a lot of time to bring up and take down.
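A hedged sketch of that kind of single-service rollout, again assuming Docker and hypothetical image and container names: only the Node container is rebuilt and replaced, while everything else on the host keeps running.

```bash
# Hedged sketch of rolling out one service independently, assuming Docker and
# hypothetical names. Only the Node container is replaced; nothing else on the
# host is touched.
docker build -t node-api:v2 .
docker stop node-api && docker rm node-api
docker run -d --name node-api -p 3000:3000 node-api:v2
```
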
>> Jem Young: And we hit on some of the benefits already but they're lightweight.

[00:12:22] It's just the facts. It's only the things that it needs. They're portable. In that case, I can create a Node application, wrap it in a container, and put it anywhere I want. And it doesn't matter what the operating system is at that point; it doesn't care, because there's a layer that handles that interaction with the operating system.

[00:12:41] So if I want, I can write a Node application, run it on Windows, run it on my Mac computer and then run it on Ubuntu, and it's all going to run the same, very similar to a virtual machine but without the operating system overhead. Honestly, microservices are easier for development because it's not a monolith.

[00:12:56] The application as a whole is, but like you're saying, Anna, it only does one thing, and it does one thing well. That makes reasoning about what's happening much simpler. They're easier to manage. I should put an asterisk by this, as you saw before with the,
>> Jem Young: Easier to manage is totally relative.

[00:13:15] Some might say this is harder to manage than one big thing; I might say it's easier to manage. In my mind they're easier to manage because each one is only doing one thing, and that makes rational sense. And at certain times it's gonna be much faster, cuz there's no boot-up time anymore.

[00:13:29] It's just literally [INAUDIBLE], just copying some files over and hitting run. And what it really does is decouple the application from the infrastructure. It doesn't matter if we switch to Ubuntu 20 or Fedora or Red Hat or Debian or something like that. It doesn't matter to the application, and me as a developer, I don't care.

[00:13:47] It doesn't affect my application in any way, because there's a layer that does that interfacing. That explains containers. I find people get hung up a little bit on them, and they confuse them with virtual machines and monoliths and all that stuff.
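To tie the portability and decoupling points together, here is one more hedged sketch, assuming Docker and a hypothetical image name and registry: the same image built on a laptop runs unchanged on any host with a container runtime, whatever distro it happens to be.

```bash
# Hedged sketch of portability, assuming Docker; the image name and registry are
# hypothetical. The same image built on a Mac or Windows laptop runs unchanged on
# an Ubuntu, Debian, or Red Hat server, because the container layer hides the
# host OS from the application.
docker build -t registry.example.com/my-node-app:1.0 .
docker push registry.example.com/my-node-app:1.0

# On any host with a container runtime, regardless of its distro:
docker run -d -p 3000:3000 registry.example.com/my-node-app:1.0
```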