Course Description
Containers are the industry standard for developing and deploying applications. Starting from scratch, you'll build containers by hand to demystify how they work. Then you’ll build Docker images for a Node.js application and add optimizations like multi-stage builds and faster rebuilds with layers. You'll even get an intro to Kubernetes, preparing you for large-scale deployments!
This course and others like it are available as part of our Frontend Masters video subscription.
Course Details
Published: August 6, 2024
Learn Straight from the Experts Who Shape the Modern Web
Your Path to Senior Developer and Beyond
- 200+ In-depth courses
- 18 Learning Paths
- Industry Leading Experts
- Live Interactive Workshops
Table of Contents
Introduction
Section Duration: 6 minutes
- Brian Holt begins the course by sharing the course website and discussing the target audience. Containers have become a critical tool for web developers and continue to grow in popularity. The course repository contains the project files required to complete the exercises throughout the course.
Crafting Containers by Hand
Section Duration: 1 hour, 5 minutes
- Brian explains containers and why they are needed. Containers offer many of the security and resource-management features of virtual machines but without the cost of running a whole other operating system.
- Brian introduces chroot, which "jails" a process to a directory by changing its apparent root. With the new root directory set, the process cannot access anything outside that directory, including binary executables. Any required executables must be copied into the new root directory before they can be run.
- Students are instructed to add the "cat" binary to the "my-new-root" directory. Any library dependencies should be copied over as well.
- Brian explains that namespaces allow processes to be hidden from other processes. Using the unshare command, the user's process is namespaced to the environment and can't see "outside" that root.
- Brian introduces cgroups and explains how they limit the resources that an environment can use. A sandbox cgroup is created, and the process of the unshared environment is added to the group. The default cgroup configurations are also added to the sandbox.
- Brian demonstrates how one container can use all available resources. The cgroup is then configured to limit the RAM and number of processes. The resource limits are then tested to verify the limits are in place.
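The steps in this section can be condensed into a rough shell session (an illustrative sketch only: it assumes a Linux host with root access and cgroup v2, and the paths, limits, and PID variable are placeholders, not the course's exact values):

```shell
# Build a minimal "jail" directory with a shell and its libraries.
mkdir -p /my-new-root/bin /my-new-root/lib /my-new-root/lib64
cp /bin/bash /my-new-root/bin/
# Copy each shared library that `ldd /bin/bash` lists into the jail,
# preserving its path (destinations vary by distribution).
ldd /bin/bash

# chroot alone: the process can only see files under /my-new-root.
chroot /my-new-root bash

# unshare adds namespaces so the jailed process also can't see outside
# PIDs, mounts, or network interfaces.
unshare --mount --uts --ipc --net --pid --fork chroot /my-new-root bash

# cgroup v2: cap memory and process count for a "sandbox" group.
echo "+memory +pids" > /sys/fs/cgroup/cgroup.subtree_control
mkdir /sys/fs/cgroup/sandbox
echo 80M > /sys/fs/cgroup/sandbox/memory.max
echo 3   > /sys/fs/cgroup/sandbox/pids.max
echo $JAIL_PID > /sys/fs/cgroup/sandbox/cgroup.procs  # PID of the unshared shell
```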
Docker
Section Duration: 40 minutes
- Brian explains that images are premade containers. Docker Hub is an online repository with thousands of images you can download as a starting point for a project or to experiment with a new framework, tool, or environment. An example image is downloaded from Docker Hub and run in a container to highlight how images can be used without Docker.
- Brian demonstrates some of the common commands used when running images with Docker. The commands include run, attach, ps, and kill. Interactive, detached, and name flags are also demonstrated.
- Brian runs images for distributions of JavaScript runtimes, including Node, Deno, and Bun. He also shares the Docker commands for other environments, including Ruby, Go, Rust, PHP, and Python. The benefit of these images is the ability to get an environment up and running quickly for experimentation without following complex installation instructions.
- Brian explains that tags allow you to run different versions of the same container. Omitting a tag during a docker run command implicitly applies the "latest" tag. The lesson also demonstrates additional features of the Docker CLI.
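The commands covered in this section can be summarized as a short cheat sheet (image tags and container names below are examples, not prescribed values):

```shell
# Run interactively (-it) with a friendly name; node:20 is pulled if absent.
docker run -it --name my-node node:20 bash

# Detached (-d) keeps the container running in the background.
docker run -dit --name bg-node node:20
docker ps               # list running containers
docker attach bg-node   # reattach to the detached container
docker kill bg-node     # stop it

# Tags select versions; omitting one implicitly applies :latest.
docker run -it node:20-alpine node --version
docker run -it ruby:3.3 irb
```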
Dockerfiles
Section Duration: 43 minutes
- Brian introduces Dockerfiles, which allow you to outline how a container will be built. Each line in a Dockerfile is a new directive that changes how your container is built. A basic Node 20 image is created with a command that executes a console.log statement when the container is run.
- Brian builds a container for a Node.js application. The server JavaScript file is added to the container using the COPY command in the Dockerfile. When the container is run, the port for the server can be exposed to the host computer using the publish flag.
- Brian demonstrates how to organize files inside a container. A best practice is to create a user and place files inside a directory owned by that user. This ensures the root user is not responsible for starting any processes, since running processes as root could lead to security issues.
- Brian adds npm dependencies to the Node.js application and modifies the Dockerfile to install them before starting the server. A .dockerignore, which behaves like a .gitignore, is created to prevent unwanted files from being added to the image.
- Brian demonstrates how layers allow for faster rebuilds of an image. Commands like adding Node or installing dependencies can be skipped when the only change is a source code modification. Layers will pull the other files and data from cache and only rebuild the necessary changes.
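Putting the section together, a Dockerfile along these lines captures the non-root user, the COPY step, and the layer-ordering trick (the file names and port are assumptions for illustration, not the course's exact project files):

```dockerfile
FROM node:20
# Run as the non-root `node` user that the official image provides.
USER node
WORKDIR /home/node/code
# Copy the dependency manifests first: the `npm ci` layer stays cached
# until package.json or package-lock.json actually changes.
COPY --chown=node:node package.json package-lock.json ./
RUN npm ci
# Source-code edits only invalidate layers from this point down.
COPY --chown=node:node . .
CMD ["node", "index.js"]
```

Built with `docker build -t my-node-app .` and run with `docker run -p 3000:3000 my-node-app`, assuming the server listens on port 3000. A `.dockerignore` listing entries like `node_modules` and `.git` keeps those paths out of the `COPY . .` step.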
Making Tiny Containers
Section Duration: 39 minutes
- Brian discusses the advantages of making containers as small as possible. Then, the Node.js app container is reduced by 80% by switching the Linux operating system from Debian to Alpine.
- Brian attempts to make the container smaller by creating a custom Alpine container for the Node.js application. The resulting container was 80MB smaller than the node-alpine container available in Docker Hub.
- Brian demonstrates how multi-stage builds can reduce the size of a container. In this case, npm is used in the initial build stage, and the installed Node modules are copied into the container during the second stage. The npm dependency is removed from the container since it's not needed once the modules are installed.
- Brian modifies the Node.js app container to use Google's Distroless Linux distribution. Distroless images contain only your application and its runtime dependencies. They do not contain package managers, shells, or any other programs you would expect to find in a standard Linux distribution.
- Students are instructed to create a multi-stage Dockerfile that builds an Astro project in one container and then serves it from a different container using NGINX.
- Brian walks through the solution to the Static Asset Project exercise.
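A multi-stage Dockerfile in the spirit of this section might look like the following (the distroless image tag and file layout are assumptions, not the course's exact configuration):

```dockerfile
# Stage 1: a full Node image where npm is available for installation.
FROM node:20-alpine AS build
WORKDIR /build
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY . .

# Stage 2: a distroless runtime with no shell, npm, or package manager.
FROM gcr.io/distroless/nodejs20-debian12
COPY --from=build /build /app
WORKDIR /app
# The distroless nodejs image already invokes `node`, so CMD is the script.
CMD ["index.js"]
```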
Docker Features
Section Duration: 42 minutes
- Brian shares a new feature, Docker Scout, which scans containers for vulnerabilities. By analyzing images, Docker Scout compiles an inventory of components, also known as a Software Bill of Materials (SBOM). The SBOM is matched against a continuously updated vulnerability database to pinpoint security weaknesses.
- Brian explains when you use a bind mount, a file or directory on the host machine is mounted into a container. Unlike a volume, the file or directory with a bind mount is referenced by its absolute path on the host machine. Bind mounts are useful for sharing local state between your operating system and containers.
- Brian discusses volumes and compares them with bind mounts. Volumes are designed for cases where application or container state needs to persist between runs. A Node.js application that saves a data file is added to a container. The data file persists with an incremented value after each run.
- Brian explains that dev containers provide the separate tools, libraries, or runtimes needed to work with a codebase. They are useful for complex development environments and for ensuring each team member has a consistent toolchain.
- Brian creates a Mongo database container and uses the Docker network feature to create a network so the Node.js application container can access the database. The network is created with the bridge driver, and the container name is used as the hostname in the database connection string.
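The mount and network features above can be sketched as follows (names, paths, and image tags are illustrative assumptions):

```shell
# Bind mount: an absolute host path is mapped into the container,
# so edits on the host are immediately visible inside it.
docker run -it --mount type=bind,source="$(pwd)",target=/app node:20 bash

# Volume: Docker-managed storage that persists between container runs.
docker run -it --mount type=volume,source=app-data,target=/data node:20 bash

# Network: containers on the same bridge network resolve each other by name.
docker network create --driver=bridge app-net
docker run -d --network=app-net --name=db mongo:7
docker run -it --network=app-net node:20 bash
# inside: mongodb://db:27017 works because the container name `db`
# resolves as a hostname on the shared network
```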
Multi Container Projects
Section Duration: 52 minutes
- Brian introduces Docker Compose, a tool for defining and running multi-container applications. It's ideal for local development or smaller production environments that require orchestration between several containers. A tool like Kubernetes may be a better fit once a project scales beyond a few containers.
- Brian uses Docker Compose to create a project with three container servers: An API, a database, and a static Nginx web server. Docker Compose builds and runs each container. The nodemon package is added to the project, so the server will restart from within the container when the server code is modified.
- Brian introduces Kubernetes, an open-source system for automating containerized application deployment, scaling, and management. Some fundamental concepts are explained, such as control planes, nodes, pods, services, and deployments. The kubectl installation process is also discussed.
- Brian migrates the project to Kubernetes using Kompose, a tool for converting Docker Compose projects to container orchestrators such as Kubernetes. The docker-compose.yml configuration file is modified to work with Kompose and the Kubernetes configuration files are generated. The kubectl utility is used to run all three services.
- Brian demonstrates the scaling capabilities within Kubernetes. Each service is scaled and the status is checked with "kubectl get all". The load balancer will manage traffic to each service replica.
- Brian discusses Docker alternatives, including Buildah, Podman, Colima, and rkt. He also compares container orchestration tools like Apache Mesos, Docker Swarm, OpenShift, Nomad, and Rancher. The lesson also includes a discussion on tools for managing secrets.
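A docker-compose.yml along these lines ties the three services together (the service names, ports, and environment variable are assumptions for illustration):

```yaml
services:
  db:
    image: mongo:7
  api:
    build: ./api
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      # The Compose service name doubles as a hostname on the network.
      MONGO_CONNECTION_STRING: mongodb://db:27017
  web:
    build: ./web        # e.g. an NGINX image serving static assets
    ports:
      - "8080:80"
```

`docker compose up --build` starts all three containers. From the same file, `kompose convert` can generate Kubernetes manifests, after which commands like `kubectl apply -f .`, `kubectl scale deployment api --replicas=3`, and `kubectl get all` mirror the scaling demo from this section.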
Wrapping Up
Section Duration: 3 minutes
- Brian wraps up the course by summarizing the topics discussed. Containers continue to grow in popularity, and having a fundamental understanding of containers and container orchestration increases developer productivity and efficiency.