Implement Windows containers

  • 10/6/2017

Skill 4.2: Manage Windows containers

  • Manage Windows or Linux containers using the Docker daemon

  • Manage Windows or Linux containers using Windows PowerShell

  • Manage container networking

  • Manage container data volumes

  • Manage Resource Control

  • Create new container images using Dockerfile

  • Manage container images using DockerHub repository for public and private scenarios

  • Manage container images using Microsoft Azure

Manage Windows or Linux containers using the Docker daemon

When you use the Docker Run command to create a new container, you can include the -it switches to work with it interactively, or you can omit them and let the container run in the background. Either way, you can continue to use the Docker client to manage the container, whether it is running Windows or Linux.

Listing containers

To leave a PowerShell or CMD session you started in a container, you can just type the following:

exit

However, this not only closes the session, it also stops the container. A stopped container still exists on the host; it is just functionally turned off. To exit a session without stopping the container, press Ctrl+P, then Ctrl+Q.

You can display a list of all the running containers on the host by using the Docker PS command. If you add the -a (for all) switch, as in the following example, the command displays all of the containers on the host, whether running or not, as shown in Figure 4-12.

docker ps -a
FIGURE 4-12

FIGURE 4-12 Output of the Docker ps -a command

Starting and stopping containers

To start a stopped container, you use the Docker Start command, as in the following example:

docker start dbf9674d13b9

You can also forcibly stop a container by using the Docker Stop command, as follows:

docker stop dbf9674d13b9

The six-byte hexadecimal string in these commands is the Container ID that Docker assigns to the container when creating it. You use this value in Docker commands to identify the container that you want to manage. This value also becomes the container’s computer name, as you can see if you run Get-ComputerInfo from within a container session.

If you run Docker PS with the --no-trunc (for no truncation) parameter, as shown in Figure 4-13, you can see that the Container ID is a 32-byte hexadecimal string, although it is far more convenient to use just the first six bytes on the command line.

FIGURE 4-13

FIGURE 4-13 Output of the Docker ps -a --no-trunc command
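Whichever form you use, Docker resolves it to the same container. As a quick check, the Docker Inspect command can expand a short ID to its full form; this is a hedged sketch, with dbf9674d13b9 standing in for a container ID that actually exists on your host:

```shell
# Expand the short ID used in the earlier examples to the full
# 64-digit Container ID (requires a running Docker daemon)
docker inspect --format "{{.Id}}" dbf9674d13b9
```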

Attaching to containers

To connect to a session on a running container, use the Docker Attach command, as in the following example:

docker attach dbf9674d13b9

Running the command in additional windows attaches each of them to the same session; input and output are mirrored across the attached windows, enabling you to work with the container from more than one window at once.

Creating images

If you have modified a container in any way, you can save the modifications to a new image by running the Docker Commit command, as in the following example:

docker commit dbf9674d13b9 hholt/killerapp:1.5

This command creates a new image called hholt/killerapp with a tag value of 1.5. The Docker Commit command does not create a duplicate of the base image with the changes you have made; it only saves the changes. If, for example, you use the microsoft/windowsservercore base image to create the container, and then you install your application, running Docker Commit will only save the application. If you provide the new image to a colleague, she must have (or obtain) the base image in order to run the container.
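Taken together, the commit workflow looks like the following sketch. The image names follow the earlier examples, and the change made inside the container is illustrative; any modification would be captured the same way:

```shell
# Create a container from the base image and work in it interactively
docker run -it microsoft/windowsservercore powershell
# (inside the container) install your application, then exit

# Save only the changes as a new image layered on the base
docker commit dbf9674d13b9 hholt/killerapp:1.5

# The new image now appears in the local image list
docker images
```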

Removing containers

To remove a container completely, use the Docker RM command, as shown in the following example:

docker rm dbf9674d13b9

Containers must be in a stopped state before you can remove them this way. However, adding the -f (for force) switch will cause the Docker RM command to remove any container, even one that is running.
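Because Docker RM accepts multiple container IDs, you can combine it with Docker PS to clean up every container on a host in one command. A common idiom, shown here as a sketch (use it with care, since -f removes running containers as well):

```shell
# docker ps -a -q outputs only the IDs of all containers on the host;
# docker rm -f stops and removes each one
docker rm -f $(docker ps -a -q)
```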

Manage Windows or Linux containers using Windows PowerShell

As mentioned earlier, the Dockerd engine does not require the use of the Docker.exe client program. Because Docker is an open source project, it is possible to create an alternative client implementation that you can use with Dockerd, and Microsoft, in cooperation with the Docker community, is doing just that by creating a PowerShell module that you can use to create and manage Docker containers.

Because the Docker module for PowerShell is under development, it does not necessarily support all of the functions possible with the Docker.exe client. However, the primary functions are there, as shown in the following sections.

Listing containers

You can display a list of all the containers on the host by running the Get-Container cmdlet in Windows PowerShell, as shown in Figure 4-14. Unlike the Docker PS command, the Get-Container cmdlet displays all of the containers on the host, whether they are running or stopped.

FIGURE 4-14

FIGURE 4-14 Output of the Get-Container cmdlet

Starting and stopping containers

When you create a container using the New-Container cmdlet, the container is not started by default. You must explicitly start it. To start a stopped container, you use the Start-Container cmdlet, as in the following example:

start-container dbf9674d13b9

You can also stop a container by simply changing the verb to the Stop-Container cmdlet, as follows:

stop-container dbf9674d13b9

Attaching to containers

To connect to a session on a running container, use the Enter-ContainerSession cmdlet, as in the following example:

Enter-containersession dbf9674d13b9

This cmdlet is also aliased as Attach-Container, enabling you to reuse another command with just a verb change.

Creating images

If you have modified a container in any way, you can save the modifications to a new image by running the ConvertTo-ContainerImage cmdlet, as in the following example:

convertto-containerimage -containeridorname dbf9674d13b9 -repository hholt/killerapp -tag 1.5

This cmdlet is also aliased as Commit-Container.

Removing containers

To remove a container, use the Remove-Container cmdlet, as shown in the following example:

remove-container dbf9674d13b9

As with the Docker RM command, containers must be in a stopped state before you can remove them. However, adding the -Force switch causes the cmdlet to remove any container, even one that is running.

Manage container networking

Containers can access the outside network. This is easy to prove, by pinging a server on the local network or the Internet. However, if you run the Ipconfig /all command in a container session, as shown in Figure 4-15, you might be surprised at what you see.

FIGURE 4-15

FIGURE 4-15 Output of Ipconfig /all command on a container

In this example, the IP address of the network adapter in the container is 172.25.117.12/12, which is nothing like the address of the network on which the container host is located. However, if you run the Ipconfig /all command on the container host, as shown in Figure 4-16, the situation becomes clearer.

FIGURE 4-16

FIGURE 4-16 Output of Ipconfig /all command on a container host

There are two Ethernet adapters showing on the container host system. One has an IP address on the 192.168.2.0/24 network, which is the address used for the physical network to which the container host is connected. The other adapter has the address 172.25.112.1/12, which is on the same network as the container’s address. In fact, looking back at the container’s configuration, the container host’s address is listed as the Default Gateway and DNS Server address for the container. The container host is, in essence, functioning as a router between the 172.16.0.0/12 network on which the container is located and 192.168.2.0/24, which is the physical network to which the host is connected. The host is also functioning as the DNS server for the container.

If you look at another container on the same host, it has an IP address on the same network as the first container. The two containers can ping each other’s addresses, as well as those of systems outside the 172.16.0.0/12 network.

This is possible because the Containers feature and Docker use network address translation (NAT) by default, to create a networking environment for the containers on the host. NAT is a routing solution in which the network packets generated by and destined for a system have their IP addresses modified, to make them appear as though the system is located on another network.

When you ping a computer on the host network from a container session, the container host modifies the ping packets, substituting its own 192.168.2.43 address for the container’s 172.25.117.12 address in each one. When the responses arrive from the system being pinged, the process occurs in reverse.

The Dockerd engine creates a NAT network by default when it runs for the first time, and assigns each container an address on that NAT network. The use of the 172.16.0.0/12 network address is also a default coded into Docker. However, you can modify these defaults by specifying a different NAT address or by not using NAT at all.

The network adapters in the containers are, of course, virtual. You can see in the configuration shown earlier that the adapter for that container is identified as vEthernet (Container NIC 76b9f047). On the container host, there is also a virtual adapter, called vEthernet (HNS Internal NIC). HNS is the Host Network Service, which is the NAT implementation used by Docker. If you run the Get-VMSwitch cmdlet on the container host or look in the Virtual Switch Manager in Hyper-V Manager, as shown in Figure 4-17, you can see that Docker has also created a virtual switch called nat. This is the switch to which the adapters in the containers are all connected. Therefore, you can see that containers function much like virtual machines, as far as networking is concerned.

FIGURE 4-17

FIGURE 4-17 Nat switch in the Virtual Switch Manager

Modifying NAT defaults

If you want to use a different network address for Docker’s NAT configuration (because you already have a network using that same address, for example), it is possible to do so. To specify an alternate address, you must use the daemon.json configuration file, as discussed earlier in the remote Docker client configuration.

Daemon.json is a plain text file that you create in the directory where the Dockerd.exe program is located. To specify an alternate NAT network address, you include the following text in the file:

{ "fixed-cidr":"192.168.10.0/24" }

You can use any network address for the NAT implementation, but to prevent address conflicts on the Internet, you should use a network in one of the following reserved private network addresses:

  • 10.0.0.0/8

  • 172.16.0.0/12

  • 192.168.0.0/16

To prevent the Dockerd engine from creating any network implementation at all, place the following text in the daemon.json file:

{ "bridge":"none" }

If you do this, you must manually create a container network, if you want your containers to have any network connectivity.
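Because a malformed daemon.json prevents the Dockerd engine from starting, it is worth validating the file’s syntax before restarting the service. A minimal sketch, using generic shell commands for illustration (on a Windows container host you would create the file with Notepad or Set-Content in the Dockerd.exe directory):

```shell
# Write a daemon.json that moves Docker's NAT network to 192.168.10.0/24
cat > daemon.json <<'EOF'
{ "fixed-cidr": "192.168.10.0/24" }
EOF

# Check that the file is well-formed JSON before restarting the
# Docker service; any parse error is reported here instead of at startup
python3 -m json.tool daemon.json
```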

Port mapping

If you plan to run a server application in a container that must expose ports for incoming client traffic, you must use a technique called port mapping. Port mapping enables the container host, which receives the client traffic, to forward the packets to the appropriate port in the container running the application. To use port mapping, you add the -p switch to the Docker Run command, along with the port numbers on the container host and the container, respectively, as in the following example:

docker run -it -p 8080:80 microsoft/windowsservercore powershell

In this example, any traffic arriving through the container host’s port 8080 will be forwarded to the container’s port 80. Port 80 is the well-known port number for web server traffic, and this arrangement enables the container to use this standard port without monopolizing it on the container host, which might need port 80 for its own web server.

Creating a transparent network

Instead of using NAT, you can choose to create a transparent network, one in which the containers are connected to the same network as the container host. If the container host is a physical computer, the containers are connected to the physical network. If the container host is a virtual machine, the containers are connected to whatever virtual switch the VM uses.

Docker does not create a transparent network by default, so you must create it, using the Docker Network Create command, as in the following example:

docker network create -d transparent trans

In this example, the command creates a new network using the transparent driver, signified by the -d switch, and assigns it the name trans. Running the following command displays a list of all the container networks, which now includes the trans network you just created, as shown in Figure 4-18.

docker network ls
FIGURE 4-18

FIGURE 4-18 Output of the Docker Network LS command

Once you have created the transparent network, you can create containers that use it by adding the network parameter to your Docker Run command, as in the following example:

docker run -it --network=trans microsoft/windowsservercore powershell

When you run the Ipconfig /all command in this container, you can see that it has an IP address on the 10.0.0.0/24 network, which is the same as the network used by the virtual machine functioning as the container host.

When you create a transparent network and the containers that use it, they all obtain IP addresses from a DHCP server on the container host network, if one is available. If there is no DHCP server available, however, you must specify the network address settings when creating the network and manually configure the IP address of each container by specifying it on the Docker Run command line.

To create a transparent network with static IP addresses, you use a command like the following:

docker network create -d transparent --subnet=10.0.0.0/24 --gateway=10.0.0.1 trans

Then, to create a container with a static IP address on the network you created, you use a Docker Run command like the following:

docker run -it --network=trans --ip=10.0.0.16 --dns=10.0.0.10 microsoft/windowsservercore powershell

Manage container data volumes

In some instances, you might want to preserve data files across containers. Docker enables you to do this by creating data volumes on a container that correspond to a folder on the container host. Once created, the data you place in the data volume on the container is also found in the corresponding folder on the container host. The opposite is also true; you can copy files into the folder on the host and access them in the container.

Data volumes persist independent of the container. If you delete the container, the data volume remains on the container host. You can then mount the container host folder in another container, enabling you to retain your data through multiple iterations of an application running in your containers.

To create a data volume, you add the -v switch to a Docker Run command, as in the following example:

docker run -it -v c:\appdata microsoft/windowsservercore powershell

This command creates a folder called c:\appdata in the new container and links it to a subfolder in C:\ProgramData\docker\volumes on the container host. To learn the exact location, you can run the following command and look in the Mounts section, as shown in Figure 4-19.

docker inspect dbf9674d13b9
FIGURE 4-19

FIGURE 4-19 Partial output of the Docker Inspect command

The Mounts section (which is a small part of a long, comprehensive listing of the container’s specifications) contains Source and Destination properties. Destination specifies the folder name in the container, and Source is the folder on the container host. To reuse a data volume, you can specify both the source and destination folders in the Docker Run command, as in the following example:

docker run -it -v c:\sourcedata:c:\appdata microsoft/windowsservercore powershell

If you create a data volume, specifying a folder on the container that already contains files, the existing contents are overlaid by the data volume, but are not deleted. Those files are accessible again when the data volume is dismounted.

By default, Docker creates data volumes in read/write mode. To create a read-only data volume, you can add :ro to the container folder name, as in the following example:

docker run -it -v c:\appdata:ro microsoft/windowsservercore powershell
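The persistence behavior described in this section can be sketched as the following sequence; the folder names follow the earlier examples, and note.txt is an illustrative file name:

```shell
# Mount the host folder c:\sourcedata as c:\appdata in a new container
docker run -it -v c:\sourcedata:c:\appdata microsoft/windowsservercore powershell
# (inside the container) create a file in the volume, then exit:
#   set-content c:\appdata\note.txt "data to keep"

# Removing the container leaves c:\sourcedata intact on the host
docker rm dbf9674d13b9

# A new container mounting the same host folder sees the same file
docker run -it -v c:\sourcedata:c:\appdata microsoft/windowsservercore powershell
```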

Manage resource control

As noted earlier, the Docker Run command supports many parameters and switches, some of which have already been demonstrated in this chapter. For example, you have seen how the -it switches create an interactive container that runs a specific shell or other command. To create a container that runs in the background—in what is called detached mode—you use the -d switch, as in the following example:

docker run -d -p 80:80 microsoft/iis

To interact with a detached container, you can use network connections or file system shares. You can also connect to the container using the Docker Attach command.

Working with container names

By default, when you create a container using the Docker Run command, the Dockerd engine assigns three identifiers to the container, as shown in Figure 4-20:

  • Long UUID A 32-byte hexadecimal string, represented by 64 digits, as in the following example: 0e38bdac48ca0120eff6491a7b9d1908e65180213b2c1707b924991ae8d1504f

  • Short UUID The first six bytes of the long UUID, represented as 12 digits, as in the following example: 0e38bdac48ca.

  • Name A randomly chosen name consisting of two words separated by an underscore character, as in the following example: drunk_jones

FIGURE 4-20

FIGURE 4-20 Output of the Docker ps --no-trunc command

You can use any of the three identifiers when referencing the container on the command line. You can also assign your own name to the container when you create it by adding the --name parameter to the Docker Run command line, as in the following example:

docker run -it --name core1 microsoft/windowsservercore powershell

Constraining memory

The Docker Run command supports parameters that enable you to specify how much memory a container is permitted to use. By default, container processes can use as much host memory and swap memory as they need. If you are running multiple containers on the same host or a memory-intensive application on the host itself, you might want to impose limits on the memory certain containers can use.

The memory parameters you can use in a Docker Run command are as follows:

  • -m (or --memory) Specifies the amount of memory the container can use. Values consist of an integer and the unit identifier b, k, m, or g (for bytes, kilobytes, megabytes, or gigabytes, respectively).

  • --memory-swap Specifies the total amount of memory plus virtual memory that the container can use. Values consist of an integer and the unit identifier b, k, m, or g.

  • --memory-reservation Specifies a soft memory limit that the host retains for the container, even when there is contention for system memory. For example, you might use the -m switch to set a hard limit of 1 GB, and a memory reservation value of 750 MB. When other containers or processes require additional memory, the host might reclaim up to 250 MB of the container’s memory, but will leave at least 750 MB intact. Values consist of an integer smaller than that of the -m or --memory-swap value and the unit identifier b, k, m, or g.

  • --kernel-memory Specifies the amount of the memory limit set using the -m switch that can be used for kernel memory. Values consist of an integer and the unit identifier b, k, m, or g.

  • --oom-kill-disable Prevents the kernel from killing container processes when an out-of-memory error occurs. Never use this option without the -m switch to create a memory limit for the container; otherwise, the kernel could start to kill processes on the host when an OOM error occurs.
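As a sketch of these parameters together, the 1 GB hard limit and 750 MB reservation scenario described above would look like this (the microsoft/iis image is illustrative):

```shell
# Hard limit of 1 GB; at least 750 MB is retained for the container
# even when the host comes under memory pressure
docker run -d -m 1g --memory-reservation 750m microsoft/iis
```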

Constraining CPU cycles

You can also specify parameters that limit the CPU cycles allocated to a container. By default, all the containers on a host share the available CPU cycles equally. Using these parameters, you can assign priorities to the containers, which take effect when CPU contention occurs.

The Docker Run parameters that you can use to control container access to CPUs are as follows:

  • -c (or --cpu-shares) Specifies a value from 0 to 1024 that sets the weight of the container when contending for CPU cycles. The actual amount of processor cycles that a container receives depends on the number of containers running on the host and their respective weights.

  • --cpuset-cpus Specifies which CPUs in a multiprocessor host system the container can use. Values consist of integers representing the CPUs in the host computer, separated by commas.

  • --cpuset-mems Specifies which nodes on a NUMA host the container can use. Values consist of integers representing the NUMA nodes in the host computer, separated by commas.
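As a sketch of these parameters (the values and image are illustrative), the following command creates a detached container weighted at half the default CPU share and restricted to the first two CPUs in the host:

```shell
# --cpu-shares 512 gives this container half the default weight of 1024;
# --cpuset-cpus 0,1 restricts it to the host's first two CPUs
docker run -d --cpu-shares 512 --cpuset-cpus 0,1 microsoft/iis
```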

Create new container images using Dockerfile

If you have made changes to a container since you first created it with the Docker Run command, you can save those changes by creating a new container image using Docker Commit. However, the recommended method for creating container images is to build them from scratch using a script called a dockerfile.

A dockerfile is a plain text file, with the name dockerfile, which contains the commands needed to build your new image. Once you have created the dockerfile, you use the Docker Build command to execute it and create the new image. The dockerfile is just a mechanism that automates the process of executing the steps you used to modify your container manually. When you run the Docker Build command with the dockerfile, the Dockerd engine runs each command in the script by creating a container, making the modifications you specify, and executing a Docker Commit command to save the changes as a new image.

A dockerfile consists of instructions, such as FROM or RUN, and a statement for each instruction. The accepted format is to capitalize the instruction. You can insert remarks into the script by preceding them with the pound (#) character.

An example of a simple dockerfile is as follows:

#install DHCP server
FROM microsoft/windowsservercore
RUN powershell -command install-windowsfeature dhcp -includemanagementtools
RUN powershell -configurationname microsoft.powershell -command add-dhcpserverv4scope -state active -activatepolicies $true -name scopetest -startrange 10.0.0.100 -endrange 10.0.0.200 -subnetmask 255.255.255.0
RUN md boot
COPY ./bootfile.wim c:/boot/
CMD powershell

In this example:

  • The FROM instruction specifies the base image from which the new image is created. In this case, the new image starts with the microsoft/windowsservercore image.

  • The first RUN command opens a PowerShell session and uses the Install-WindowsFeature cmdlet to install the DHCP role.

  • The second RUN command uses the Add-DhcpServerv4Scope cmdlet to create a new scope on the DHCP server.

  • The third RUN command creates a new directory called boot.

  • The COPY command copies a file called bootfile.wim from the current folder on the container host to the c:\boot folder on the container.

  • The CMD command opens a PowerShell session when the image is run.

Once you have created the dockerfile script, you use the Docker Build command to create the new image, as in the following example:

docker build -t dhcp .

This command reads the dockerfile from the current directory and creates an image called dhcp. As the Dockerd engine builds the image, it displays the results of each command and the IDs of the interim containers it creates, as shown in Figure 4-21. Once you have created the image, you can then create a container from it using the Docker Run command in the usual manner.

FIGURE 4-21

FIGURE 4-21 Output of the Docker Build command

This is a simple example of a dockerfile, but dockerfiles can be much longer and more complex.

Manage container images using DockerHub Repository for public and private scenarios

DockerHub is a public repository that you can use to store and distribute your container images. When you download container images using the Docker Pull command, they come from DockerHub by default, unless you specify another repository in the command. However, you can upload images as well, using the Docker Push command.

Uploading images to DockerHub enables you to share them with your colleagues, and even with yourself, so you don’t have to transfer files manually to deploy a container image on another host.

Before you can upload images to DockerHub, you must register at the site at http://hub.docker.com. Once you have done this, your user name becomes the name of your repository on the service. For example, the microsoft/windowsservercore image you pulled earlier is an image called windowsservercore in the microsoft repository. If your user name on DockerHub is hholt, your images will all begin with that repository name, followed by the image name, as in the following example:

hholt/nano1

Once you have an account, you must log in to the DockerHub service from the command line before you can push images. You do this with the following command:

docker login

Docker prompts you for your user name and password, and then provides upload access to your repository.

Searching for images

You can search for images on the DockerHub by using the web site, as shown in Figure 4-22. This interface provides the latest information about the image, as well as comments from other users in the Docker community.

FIGURE 4-22

FIGURE 4-22 Screen capture of a DockerHub web search

You can also search the DockerHub from the command line, using the Docker Search command, as in the following example:

docker search microsoft --no-trunc

Adding the --no-trunc parameter prevents the command from truncating the image descriptions, as shown in Figure 4-23.

FIGURE 4-23

FIGURE 4-23 Output of the Docker Search command

Pushing images

To upload your own images to the repository, you use the Docker Push command, as in the following example:

docker push hholt/nano1

By default, the Docker Push command uploads the specified image to your public repository on the DockerHub, as shown in Figure 4-24. Anyone can access images pushed in this way.

FIGURE 4-24

FIGURE 4-24 Output of the Docker Push command
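An image must carry your repository name before you can push it. If you built an image under a local name, such as the dhcp image created earlier, the Docker Tag command gives it a repository-qualified name first (hholt and the 1.0 tag are illustrative):

```shell
# Give the locally built image a name in your DockerHub repository
docker tag dhcp hholt/dhcp:1.0

# Log in, then upload the tagged image
docker login
docker push hholt/dhcp:1.0
```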

Because Docker is open source software, sharing images and code with the community is a large part of the company’s philosophy. However, it is also possible to create private repositories, which you can share with an unlimited number of collaborators you select. This enables you to use DockerHub for secure application development projects or any situation in which you do not want to deploy an image to the public. DockerHub provides a single private repository as part of its free service, but for additional repositories, you must purchase a subscription.

In addition to storing and providing images, DockerHub provides other services as well, such as automated builds. By uploading a dockerfile and any other necessary files to a repository, you can configure DockerHub to automatically execute builds for you, to your exact specifications. The code files are available to your collaborators, and new builds can occur whenever the code changes.

Manage container images using Microsoft Azure

In addition to creating containers locally, you can also use them on Microsoft Azure. By creating a Windows Server 2016 virtual machine on Azure, you can create and manage containers just as you would on a local server. Azure also provides the Azure Container Service (ACS), which enables you to create, configure, and manage a cluster of virtual machines, configured to run container-based applications using various open source technologies.

Microsoft Azure is a subscription-based cloud service that enables you to deploy virtual machines and applications and integrate them into your existing enterprise. By paying a monthly fee, you can create a Windows Server 2016 virtual machine, as shown in Figure 4-25. Once you have created the virtual machine, you can install the Containers feature and the Docker engine. Containers and images that you create on an Azure virtual machine are completely compatible with the Docker implementations on your local computers.

FIGURE 4-25

FIGURE 4-25 Microsoft Azure Resource Center