Containerization with Docker
- By Dawid Borycki
- 2/25/2026
- Why use containers?
- What is Docker?
- Running multiple containers
- Summary
This chapter provides an overview of containerization and Docker, delving deeper into the motivation for using containers in application development and deployment. You will learn how to use Docker to create and manage containers and put the People.WebApp to work.
Why use containers?
Containers have become a critical technology with many advantages over traditional approaches to managing development environments and deploying applications. Contrasting these advantages with the limitations of traditional methods will clarify why containers, and specifically Docker, have gained widespread adoption in the industry.
Imperative vs. declarative configuration
Remember the old days of preparing your development machine? You’d install every application individually, from your integrated development environment (IDE) to dependencies, in a process called imperative configuration. The state of your development machine was defined by executing a series of installation steps. If your hard drive crashed, you had to replace it and reinstall everything, a time-consuming process that left you unable to work during the restoration.
Hardware manufacturers and software companies proposed a better solution based on hard drive images. After installing all your tools, you could take a drive snapshot to create an image containing all your applications, dependencies, and data. In the event of a crash, you could quickly restore your machine using this image and dedicated software to replicate the image to a new hard drive. As you may remember from Chapter 1, “Motivation,” this method is called declarative configuration; you declare a desired system state (defined as the image), and the software automatically updates it if necessary.
Declarative configuration using drive images has two advantages:
- Disaster preparedness and rapid deployment: After you prepare your computer and take a snapshot of the drive, you can deploy this image to hundreds of devices in your company, even over a network, preparing for disaster and enabling rapid deployment of cloned systems.
- Physical and virtual machines: You can use drive images to set up environments on both physical and virtual machines. In the cloud, you can use an image to create a virtual machine (VM) instance, which can differ from the host's operating system.
What is containerization, and how can it help you?
Given the success of disk and VM images, the next logical step was to apply this concept to application deployment. By packaging an application with all its dependencies into a container image, you can deploy the application as a container instance in local, virtual, or cloud environments. This process, known as containerization, allows for the creation and versioning of various container images to fully control deployment. Containerization packages together an application or service, its dependencies, and its configuration (abstracted as deployment manifest files) all into a single container image. This enables you to test and deploy the containerized application as a unit to any host operating system (OS).
In practice, containerization addresses multiple challenges. For instance, if a hypothetical application requires a specific runtime like the Visual C++ redistributable, it may work fine on your development machine but fail on a customer’s machine due to missing dependencies. Containers package the application with all necessary runtimes and configurations, ensuring it functions identically everywhere.
Containers simplify application distribution by encapsulating the application, all its dependencies, configurations, and necessary data into a container image. You then distribute this image to end-users, who need to have only a container runtime installed on their machines. This ensures that everything you tested on your development machine works the same way on any other device, provided both machines use the same container runtime.
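To make this concrete, the packaging step is typically described in a Dockerfile. The following is a minimal, illustrative sketch for a .NET web application; the base images are the standard Microsoft ones, but the project output name (`MyWebApp.dll`) is a placeholder, not the actual People.WebApp build:

```dockerfile
# Build stage: compile the app inside a container that includes the .NET SDK
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: copy only the published output onto a slimmer runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
```

You would build and run such an image with commands like `docker build -t mywebapp .` followed by `docker run -p 8080:8080 mywebapp`. The two-stage structure keeps build tools out of the final image, so end-users download only the runtime and the published application.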
Another significant advantage of containerization is its ability to support applications or services developed using different programming languages and technologies, all running under the same container runtime. What does this mean? In modern application development, you select the right tool for each job, using dedicated tools to address specific challenges. Specifically, you might use:
- Python for machine learning and AI
- JavaScript, HTML, and CSS for frontends
- Java, Go, PHP, or .NET for backends
Furthermore, all containers can run under the same container runtime. This is similar to how multiple virtual machines, each powered by different guest operating systems, can run simultaneously on a single host machine using the same virtualization software. This is feasible because each container image encapsulates a specific part of the entire software solution—along with all the dependencies (like runtimes), configurations, and data. After the container image is created, the container runtime does not concern itself with the programming language or technology used to develop the app or service.
To better understand software containers and containerization, think of the transportation industry. Teams pack items—cars, electronics, clothing, tools, and more—into physical containers. Trucks and container ships then transport these physical containers (Figure 3-1). Because the containers conform to standard dimensions, they're easy to transfer from train cars to trucks to ships, which are all designed to be compatible with that standard. Importantly, the vehicles do not concern themselves with the contents of the containers.
FIGURE 3-1 An analogy between physical and software containers.
The same principle applies to software containers. The container runtime is indifferent to the programming tools used to implement an application or service packaged within a container image. You only need to adhere to specific standards when creating the container image, ensuring the container runtime can correctly initiate one or more container instances from your image as shown in Figure 3-2.
Containers vs. virtual machines
Containers are often explained using their similarity to virtual machines (VMs), as shown in Figure 3-2. While containers are somewhat akin to VMs, there are key differences: VMs require more resources because they include the entire guest operating system (OS), applications, and their dependencies. Containers, on the other hand, encapsulate only the applications and their direct dependencies, without an entire guest OS. Therefore, containers are quicker to deploy and start but provide less isolation compared to VMs. The primary goal of a container image is to ensure consistency across different deployments, eliminating the "it works on my machine" issue.
FIGURE 3-2 A comparison of the application deployment models.
Containers offer advantages over VMs in terms of horizontal application scaling. Using VMs to provision multiple instances of an application or service consumes more resources than necessary, as each instance includes the entire operating system. Modern applications are often composed of many components designed as microservices, which allows each service to be developed independently with different, problem-specific tools and programming languages. Each service can be packaged as a container image and scaled separately, allowing for more precise adjustments to scaling based on actual traffic and application-specific needs.
For example, a typical media streaming platform might include several microservices responsible for user authentication, content browsing, media playback, and recommendation engines. Users typically spend much more time browsing or watching content than logging in. Therefore, in practice, more instances of the browsing and playback services are needed compared to the login or user profile services.
Scaling such an application using VMs would involve scaling the entire application along with the OS and all dependencies, thereby using more hardware resources than necessary. This challenge can be addressed at the app or service level by scaling with containers. In this approach, individual services are scaled horizontally: for example, 15 instances of the media playback service can run alongside only two instances of the authentication service. Furthermore, these services can be deployed and updated independently. Although there is still some overhead from deploying app or service dependencies in each container, this overhead is significantly smaller than that associated with VMs.
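Per-service scaling like this can be expressed declaratively. The following Docker Compose sketch uses hypothetical service names and image tags (nothing here refers to a real registry) to show how the playback and authentication services could be scaled to different replica counts:

```yaml
# docker-compose.yml -- illustrative services for the streaming example
services:
  playback:
    image: example/playback-service:1.0   # hypothetical image
    deploy:
      replicas: 15    # heavy traffic: many users stream at once
  auth:
    image: example/auth-service:1.0       # hypothetical image
    deploy:
      replicas: 2     # light traffic: users log in only occasionally
```

With Docker Swarm (`docker stack deploy`) or recent Docker Compose versions, the `deploy.replicas` setting is honored directly; with plain `docker compose up`, you can achieve the same effect using the `--scale` flag, for example `docker compose up --scale playback=15 --scale auth=2`. Either way, each service scales independently of the others.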

NOTE