Implement Windows containers

The containers feature virtualizes operating systems in Windows Server 2016. In this sample chapter from Exam Ref 70-740 Installation, Storage and Compute with Windows Server 2016, learn how to deploy and manage Windows containers in cooperation with Docker.

Containers are a means of rapidly deploying virtualized, isolated operating system environments, for application deployment and execution. Windows Server 2016 includes support for containers, in cooperation with an open source container engine called Docker.

Skill 4.1: Deploy Windows containers

Virtualization has been an important watchword since the early days of Windows. Virtual memory has been around for decades; Windows can use disk space to make the system seem to have more memory than it actually has. Hyper-V virtualizes hardware, creating computers within a computer that seem to have their own processors, memory, and disks, when in fact they are sharing the resources of the host server. The Containers feature, new in Windows Server 2016, virtualizes operating systems.

Determine installation requirements and appropriate scenarios for Windows containers

Just as virtual machines provide what appear to be separate computers, containers provide what appear to be separate instances of the operating system, each with its own memory and file system, and running a clean, new copy of the operating system. Unlike virtual machines, however, which run separate copies of the operating system, containers share the operating system of the host system. There is no need to install a separate instance of the operating system for each container, nor does the container perform a boot sequence, load libraries, or devote memory to the operating system files. Containers start in seconds, and you can create more containers on a host system than you can virtual machines.

To users working with containers, what they see at first appears to be a clean operating system installation, ready for applications. The environment is completely separated from the host, and from other containers, using namespace isolation and resource governance.

Namespace isolation means that each container only has access to the resources that are available to it. Files, ports, and running processes all appear to be dedicated to the container, even when they are being shared with the host and with other containers. The working environment appears like that of a virtual machine, but unlike a virtual machine, which maintains separate copies of all the operating system files, a container is sharing these files with the host, not copying them. It is only when a user or application in a container modifies a file that a copy is made in the container’s file system.

Resource governance means that a container has access only to a specified amount of processor cycles, system memory, network bandwidth, and other resources, and no more. An application running in a container has a clean sandbox environment, with no access to resources allocated to other containers or to the host.

Container images

The ability to create new containers in seconds, and the isolated nature of each container, make them an ideal platform for application development and software testing. However, there is more to them than that.

Containers are based on images. To create a new container, you download an image from a repository and run it. If you run an image of Windows Server 2016 Server Core, you get a container with a clean instance of the operating system running in it. Alternatively, you can download Windows Server images with roles or applications, such as Internet Information Services (IIS) or Microsoft SQL Server, already installed and ready to run.

The base operating system image never changes. If you install an application in the container and then create a new image, the resulting image contains only the files and settings needed to run the application. Naturally, the new image you created is relatively small, because it does not contain the entire operating system. To share the application with other people, you only have to send them the new, smaller image, as long as they already have the base operating system image.

This process can continue through as many iterations as you need, with layer upon layer of images building on that original base. This can result in an extremely efficient software development environment. Instead of transferring huge VHD files, or constantly creating and installing new virtual machines, you can transfer small container images that run without hardware compatibility issues.

Install and configure Windows Server Container Host in physical or virtualized environments

Windows Server 2016 supports two types of containers: Windows Server Containers and Hyper-V containers. The difference between the two is in the degree of container isolation they provide. Windows Server Containers operate in user mode and share everything with the host computer, including the operating system kernel and the system memory.

Because of this, it is conceivable that an application, whether accidentally or deliberately, might be able to escape from the confines of its container and affect other processes running on the host or in other containers. Windows Server Containers are therefore suitable only when the applications running in the different containers are basically trustworthy.

Hyper-V containers provide an additional level of isolation by using the hypervisor to create a separate copy of the operating system kernel for each container. Hyper-V creates virtual machines with Windows containers inside them, using the base container images, as shown in Figure 4-1; these virtual machines are not visible or exposed to manual management. The container implementation is essentially the same; the difference is in the environments where the two types of containers exist.

FIGURE 4-1 Windows container architecture

Because they exist inside a VM, Hyper-V containers have their own memory assigned to them, as well as isolated storage and network I/O. This provides a container environment that is suitable for what Microsoft calls “hostile multi-tenant” applications, such as a situation in which a business provides containers to clients for running their own code, which might not be trustworthy. Thus, with the addition of Hyper-V containers, Windows Server 2016 provides three levels of isolation, ranging from the separate operating system installation of Hyper-V virtual machines, to the separate kernel and memory of Hyper-V containers, to the shared kernel and other resources of Windows Server Containers.

Installing a container host

Windows Server 2016 includes a feature called Containers, which you must install to provide container support, but to create and manage containers you must download and install Docker, the application that supports the feature.

To install the Containers feature, you can use the Add Roles And Features Wizard in Server Manager, selecting Containers on the Select Features page, as shown in Figure 4-2.

FIGURE 4-2 Installing the Containers feature in the Add Roles And Features Wizard
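If you prefer to work from the command line, you can install the same feature with the Install-WindowsFeature cmdlet in an elevated PowerShell session, which is equivalent to the wizard selection shown above:

install-windowsfeature -name containers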

To create Hyper-V containers, you must install both the Containers feature and the Hyper-V role. Even though you will not be creating virtual machines for the containers, the Hyper-V role installs the hypervisor that will be needed to create the separate copy of the Windows kernel for each Hyper-V container.

The Hyper-V role has general hardware requirements that exceed those of the Windows Server 2016 operating system itself. Before you can install the Hyper-V role on a server running Windows Server 2016, you must have the following hardware:

  • A 64-bit processor that includes hardware-assisted virtualization and second-level address translation (SLAT). This type of virtualization is available in processors that include a virtualization option, such as Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) technology.

  • Hardware-enforced Data Execution Prevention (DEP), which Intel describes as eXecute Disable (XD) and AMD describes as No eXecute (NX). CPUs use this technology to segregate areas of memory for either storage of processor instructions or for storage of data. Specifically, you must enable the Intel XD bit (execute disable bit) or the AMD NX bit (no execute bit).

  • VM Monitor Mode extensions, found on Intel processors as VT-x.

  • A system BIOS or UEFI that supports the virtualization hardware and on which the virtualization feature has been enabled.
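As a quick check, you can run the Systeminfo.exe command-line tool on the prospective host; recent Windows versions include a Hyper-V Requirements section in its output that reports on SLAT, DEP, VM Monitor Mode extensions, and whether virtualization is enabled in the firmware. This is a convenience check only, not a required installation step:

systeminfo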

When you install the Hyper-V role using Server Manager, the Add Roles And Features Wizard prompts you to install the Hyper-V Management tools as well. If you are creating Hyper-V containers but not Hyper-V virtual machines, there is no need to install the management tools.

Virtualizing containers

Windows Server 2016 supports the use of containers within Hyper-V virtual machines. You can install the Containers feature and the Docker files in any virtual machine. However, to create Hyper-V containers on a virtual machine, the system must meet the requirements for nested virtualization.

To create a nested Hyper-V host server, the physical host and the virtual machine on which you create the Hyper-V containers must both be running Windows Server 2016. The VM can run the full Desktop Experience, Server Core, or Nano Server installation option. In addition, the physical host must have an Intel processor with VT-x and Extended Page Tables (EPT) virtualization support.

Before you install Hyper-V on the virtual machine, you must provide its virtual processor with access to the virtualization technology on the physical computer. To do this, you must shut down the virtual machine and run a command like the following on the physical host, in a PowerShell session with administrator privileges:

set-vmprocessor -vmname server1 -exposevirtualizationextensions $true

In addition, you must make the following configuration changes on the VM that functions as a Hyper-V host. Each is given first as the location in the VM Settings dialog box in Hyper-V Manager, and then as a PowerShell command:

  • On the Memory page, provide the VM with at least 4 gigabytes (GB) of RAM and disable Dynamic Memory.

    set-vmmemory -vmname server1 -startupbytes 4gb -dynamicmemoryenabled $false
  • On the Processor page, set Number of Virtual Processors to 2.

    set-vmprocessor -vmname server1 -count 2
  • On the Network Adapter/Advanced Features page, turn on MAC Address Spoofing.

    set-vmnetworkadapter -vmname server1 -name "network adapter" -macaddressspoofing on

Once you have made these changes, you can start the VM, install the Hyper-V role, and proceed to use Docker to create Hyper-V containers.

Install and configure Windows Server container host to Windows Server Core or Nano Server in a physical or virtualized environment

A computer installed using the Server Core option can function as a container host. The requirements are the same as for a server installed with the full Desktop Experience, except that you must either use the command line to install the required features or manage the system remotely.

After switching to a PowerShell session, you can install the Containers feature and the Hyper-V role using the following command:

install-windowsfeature -name containers, hyper-v

Configuring Nano Server as a container host

Nano Server, included with Windows Server 2016, supports both Windows Server containers and Hyper-V containers. The Nano Server implementation includes packages supporting both the Containers feature and the Hyper-V role, which you can add when you create a Nano Server image with the New-NanoServerImage cmdlet in Windows PowerShell, as in the following example:

new-nanoserverimage -deploymenttype guest -edition datacenter -mediapath d:\ -targetpath c:\nano\nano1.vhdx -computername nano1 -domainname contoso -containers -compute

This command creates a Nano Server image with the following characteristics:

  • deploymenttype guest Creates an image for use on a Hyper-V virtual machine

  • edition datacenter Creates an image using the Datacenter edition of Windows Server

  • mediapath d:\ Accesses the Nano Server source files from the D drive

  • targetpath c:\nano\nano1.vhdx Creates a VHDX image file in the C:\nano folder with the name Nano1.vhdx

  • computername nano1 Assigns the Nano Server the computer name Nano1

  • domainname contoso Joins the computer to the Contoso domain

  • containers Installs the Containers feature as part of the image

  • compute Installs the Hyper-V role as part of the image

If you plan on creating Hyper-V containers on the guest Nano Server, you must provide it with access to the virtualization capabilities of the Hyper-V server, using the following procedure.

  1. Create a new virtual machine, using the Nano Server image file you created, but do not start it.

  2. On the Hyper-V host server, grant the virtual machine access to the virtualization capabilities of the Hyper-V server’s physical processor, using a command like the following:

    set-vmprocessor -vmname nano1 -exposevirtualizationextensions $true
  3. Start the Nano Server virtual machine.

Once the Nano Server virtual machine is running, you must establish a remote PowerShell session from another computer, so you can manage it. To do this, run a command like the following on the computer you use to manage Nano Server:

enter-pssession -computername nano1 -credential
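For example, assuming you are logging on with the Nano Server's local Administrator account (the account name here is only an illustration), the command might look like the following; PowerShell then prompts you for the password:

enter-pssession -computername nano1 -credential nano1\administrator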

Install Docker on Windows Server and Nano Server

Docker is an open source tool that has been providing container capabilities to the Linux community for years. Now that it has been ported, you can implement those same capabilities in Windows. Docker consists of two files:

  • Dockerd.exe The Docker engine, also referred to as a service or daemon, which runs in the background on the Windows computer

  • Docker.exe The Docker client, a command shell that you use to create and manage containers

In addition to these two files, which you must download and install to create containers, Docker also includes the following resources:

  • Dockerfiles Script files containing instructions for the creation of container images

  • Docker Hub A cloud-based registry that enables Docker users to link to image and code repositories, as well as build and store their own images

  • Docker Cloud A cloud-based service you can use to deploy your containerized applications

Installing Docker on Windows Server

Because Docker is an open source product, it is not included with Windows Server 2016. On a Windows Server 2016 Desktop Experience or Server Core computer, you must download Docker and install it before you can create containers. To download Docker, you use OneGet, a package management framework for Windows that retrieves packages from online repositories.

To access OneGet, you must install the DockerMsftProvider module, using the following command. If you are prompted to install a NuGet provider, answer Yes.

install-module -name dockermsftprovider -repository psgallery -force

The Install-Module cmdlet downloads the requested module and installs it to the C:\Program Files\WindowsPowerShell\Modules folder, where it is accessible from any PowerShell prompt. Next, to download and install Docker, run the following Install-Package command. If the command prompts you to confirm that you want to install an untrusted package, answer Yes.

install-package -name docker -providername dockermsftprovider

This command, after downloading the Docker files, registers Dockerd.exe as a Windows service and adds the Docker.exe client to the path, so that it is executable from any location in the file system.

Once the installation is completed, restart the computer with the following command:

restart-computer -force

Installing Docker on Nano Server

Once you have entered a remote PowerShell session with a Nano Server computer, you can install Docker using the same commands as for a Desktop Experience or Server Core system. However, Microsoft recommends that, once the Dockerd service is installed on the Nano Server, you run the Docker client from the remote system.

To do this, you must complete the following tasks:

  1. Create a firewall rule. For the Nano Server to allow Docker client traffic into the system, you must create a new firewall rule opening port 2375 to TCP traffic. To do this, run the following command in the Nano Server session:

     netsh advfirewall firewall add rule name="docker daemon" dir=in action=allow protocol=tcp localport=2375

  2. Configure the Dockerd engine to accept network traffic. Docker has its origins in Linux, and like most Linux applications, it uses text files for configuration. To enable the Dockerd engine to accept client traffic over the network, you must create a text file called daemon.json in the C:\ProgramData\Docker\config directory on the Nano Server that contains the following line:

     { "hosts": ["tcp://0.0.0.0:2375", "npipe://"] }

     The following two PowerShell commands create the new file and insert the required text:

     new-item -type file c:\programdata\docker\config\daemon.json

     add-content 'c:\programdata\docker\config\daemon.json' '{ "hosts": ["tcp://0.0.0.0:2375", "npipe://"] }'

  3. Restart the Dockerd engine. Once you have created the daemon.json file, you must restart the Dockerd engine, using the following command:

     restart-service docker

  4. Download the Docker client. To manage the Dockerd engine remotely, you must download and install the Docker.exe client on the remote system (not within the Nano Server session). To do this, you can open a browser and type in the following URL to download the Docker package:

     https://download.docker.com/components/engine/windows-server/cs-1.12/docker.zip

     To do this in PowerShell, use the following command:

     invoke-webrequest "https://download.docker.com/components/engine/windows-server/cs-1.12/docker.zip" -outfile "$env:temp\docker.zip" -usebasicparsing

  5. Install Docker.exe. If you downloaded the Docker.zip file through a browser, you install the application by extracting the Docker.exe file from the zip archive and copying it to a folder you must create called C:\Program Files\Docker. To do this using PowerShell, run the following command:

     expand-archive -path "$env:temp\docker.zip" -destinationpath $env:programfiles

  6. Set the PATH environment variable. To run the Docker client from any location on the management system, you must add the C:\Program Files\Docker folder to the system’s PATH environment variable. To do this graphically, open the System Properties sheet from the Control Panel and, on the Advanced tab, click Environment Variables to display the dialog box shown in Figure 4-3.

     FIGURE 4-3 The Environment Variables dialog box

     To do this in PowerShell, run the following command:

     [environment]::setenvironmentvariable("path", $env:path + ";c:\program files\docker", [environmentvariabletarget]::machine)

Once you have completed these steps, you can run the Docker.exe client outside of the Nano Server session, but you must include the following parameter in every command, where the ipaddress variable is replaced by the address of the Nano Server you want to manage:

-h tcp://ipaddress:2375

For example, to create a new container with the microsoft/nanoserver image, you would use a command like the following:

docker -h tcp://172.21.96.1:2375 run -it microsoft/nanoserver cmd

To avoid having to add the -h parameter to every command, you can create a new environment variable as follows:

docker_host = "tcp://ipaddress:2375"

To do this in PowerShell, use a command like the following:

$env:docker_host = "tcp://172.21.96.1:2375"

Configure Docker Daemon start-up options

As mentioned in the previous section, the configuration file for the Dockerd engine is a plain text file called daemon.json, which you place in the C:\ProgramData\Docker\config folder. In addition to the “hosts” setting you used earlier to permit client traffic over the network, there are many other configuration settings you can include in the file. All of the settings you include in a single daemon.json file should be enclosed in a single set of curly braces, as in the following example:

{
    "graph": "d:\\docker",
    "bridge": "none",
    "group": "docker",
    "dns": ["192.168.9.2", "192.168.9.6"]
}

Redirecting images and containers

To configure the Dockerd engine to store image files and containers in an alternate location, you include the following setting in the daemon.json file, where d:\\docker is replaced by the location you want to use:

{ "graph": "d:\\docker" }

Suppressing NAT

By default, the Dockerd engine creates a network address translation (NAT) environment for containers, enabling them to communicate with each other and with the outside network. To modify this default behavior and prevent the engine from using NAT, you include the following setting in the daemon.json file:

{ "bridge" : "none" }

Creating an administrative group

By default, only members of the local Administrators group can use the Docker client to control the Dockerd engine when working on the local system. In some cases, you might want to grant users this ability without giving them membership in the Administrators group. You can configure Dockerd to recognize another group (in this case, the group is called “docker”) by including the following setting in the daemon.json file.

{ "group" : "docker" }

Setting DNS server addresses

To specify alternative DNS server addresses for the operating systems in containers, you can add the following setting to the daemon.json file, where address1 and address2 are the IP addresses of DNS servers:

{"dns": "address1" , "address2" }

Configure Windows PowerShell for use with containers

The Dockerd engine is supplied with a Docker.exe client shell, but it is not dependent on it. You can also use Windows PowerShell cmdlets to perform the same functions. The Docker PowerShell module, like Docker itself, is in a constant state of cooperative development, and it is therefore not included with Windows Server 2016.

You can download and install the current version of the PowerShell module from a repository called DockerPS-Dev, using the following commands:

register-psrepository -name dockerps-dev -sourcelocation https://ci.appveyor.com/nuget/docker-powershell-dev

install-module docker -repository dockerps-dev -scope currentuser

Once the download is completed, you can view a list of the Docker cmdlets by running the following command:

get-command -module docker

The current resulting output is shown in Figure 4-4.

FIGURE 4-4 Cmdlets in the Docker module for Windows PowerShell

Once you have registered the repository and imported the Docker module, you do not have to run those commands again. You can always obtain the latest version of the module by running the following command:

update-module docker

Install a base operating system

With the Dockerd engine and the Docker client installed and operational, you can take the first step toward creating containers, which is to download a base operating system image from the Docker Hub repository. Microsoft has provided the repository with Windows Server 2016 Server Core and Nano Server images, which you can download and use to create containers and then build your own container images.

To use the Docker client, you execute the Docker.exe file with a command and sometimes additional options and parameters. To download an image, you run Docker with the Pull command and the name of the image. For example, the following command downloads the Server Core image from the repository.

docker pull microsoft/windowsservercore

The PowerShell equivalent is as follows:

request-containerimage -repository microsoft/windowsservercore

The output of the command (which can take some time, depending on the speed of your Internet connection) is shown in Figure 4-5.

FIGURE 4-5 Output of the Docker Pull command

By default, the Docker Pull command downloads the latest version of the specified image, which is identified by the tag: “latest.” When there are multiple versions of the same image available, as in an application development project, for example, you can specify any one of the previous images to download, by specifying its tag. If you run the Docker Pull command with the -a parameter, you get all versions of the image. If the image you are pulling consists of multiple layers, the command automatically downloads all of the layers needed to deploy the image in a container.
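For example, to pull a specific version, you append its tag to the image name, and to pull every version you add the -a parameter. The tag value shown here is only an illustration of the syntax:

docker pull microsoft/nanoserver:10.0.14393.206

docker pull -a microsoft/nanoserver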

If you know that the repository has a Nano Server image, but you are not sure of its name, you can use the Docker Search command to locate it, and then use Docker Pull to download it, as shown in Figure 4-6.

FIGURE 4-6 Output of the Docker Search command

Tag an image

Tagging, in a container repository, is a version control mechanism. When you create multiple versions of the same image, such as the successive builds of an application, Docker enables you to assign tags to them that identify the versions. Tags are typically numbers indicating the relative ages of the image iterations, such as 1.1, 1.2, 2.0, and so forth.

There are two ways to assign a tag to an image. One is to run Docker with the Tag command, and the other is to run Docker Build with the -t parameter. In both cases, the format of the image identifier is the same.

To tag an image on your local container host, you use the following syntax:

docker tag imageid imagename:tag

If you are going to be uploading the image to the Docker Hub, you must prefix the image name with your Docker Hub user name and a slash, as follows:

docker tag imageid username/imagename:tag

For example, a user called Holly Holt might tag the latest build of her new application as follows:

docker tag c452b8c6ee1a hholt/killerapp:1.5

To do the same thing in Windows PowerShell, you would use the Add-ContainerImageTag cmdlet, as follows:

add-containerimagetag -imageidorname c452b8c6ee1a -repository hholt/killerapp -tag 1.5

If you omit the tag value from the command, Docker automatically assigns the image a tag value of the word “latest,” which can lead to some confusion. When you pull an image from a repository without specifying a tag, the repository gives you the image with the “latest” tag. However, this does not necessarily mean that the image you are getting is the newest.

The “latest” tag is supposed to indicate that the image possessing it is the most recent version. However, whether that is true or not depends on the people managing the tags for that repository. Some people think that the “latest” tag is automatically reassigned to the most recent version of an image, but this is not the case. You can assign the “latest” tag to any version of an image, the oldest or the newest. It is solely up to the managers of the repository to maintain the tag values properly. When someone tells you to get the latest build of an image, is the person referring to the most recent build or the build with the “latest” tag? They are not always the same thing.
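For example, nothing prevents a repository manager from explicitly assigning the “latest” tag to any build, old or new. Reusing the hypothetical image ID from the earlier example, a command like the following would mark that particular build as “latest,” regardless of its age:

docker tag c452b8c6ee1a hholt/killerapp:latest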

Uninstall an operating system image

Running Docker with the Images command displays all of the images on the container host, as shown in Figure 4-7.

FIGURE 4-7 Output of the Docker Images command
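In its simplest form, the command takes no additional arguments. The Docker PowerShell module presumably provides Get-ContainerImage as the equivalent cmdlet; you can confirm the exact name in the Get-Command output described earlier:

docker images

get-containerimage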

In some instances, you might examine the list of images and find yourself with images that you do not need. In this example, there are two non-English versions of Nano Server that were downloaded accidentally.

To remove images that you do not need and free up the storage space they’re consuming, you run Docker with the Rmi command and specify either the repository and tag of the specific image to delete, or the Image ID value, as in the following examples:

docker rmi -f microsoft/nanoserver:10.0.14393.206_de-de

docker rmi -f a896e5590871

The PowerShell equivalent is the Remove-ContainerImage cmdlet, as in the following:

remove-containerimage microsoft/nanoserver:10.0.14393.206_de-de

remove-containerimage a896e5590871

It is possible for the same image to be listed with multiple tags. You can tell this by the matching Image ID values. If you attempt to remove one of the images using its tag, an error appears, because the image is in use with other tags. Adding the -f parameter forces the command to delete all the tagged references to the same image.

Create Windows Server containers

With the Containers feature in place and Docker installed, you are ready to create a Windows Server container. To do this, you use the Docker Run command and specify the image that you want to run in the container. For example, the following command creates a new container with the Server Core image downloaded from Docker Hub:

docker run -it microsoft/windowsservercore powershell

In addition to loading the image into the container, the parameters in this command do the following:

  • i Creates an interactive session with the container

  • t Opens a terminal window into the container

  • powershell Executes the PowerShell command in the container session

The result is that after the container loads, a PowerShell session appears, enabling you to work inside the container. If you run the Get-ComputerInfo cmdlet in this session, you can see at the top of the output, shown in Figure 4-8, that Server Core is running in the container, even though the full Desktop Experience installation is running on the container host.

FIGURE 4-8 Output of the Get-ComputerInfo cmdlet

You can combine Docker Run switches, so that -i and -t appear as -it. After the name of the image, you can specify any command to run in the container. For example, specifying cmd would open the standard Windows command shell instead of PowerShell.

The Docker Run command supports many command line parameters and switches, which you can use to tune the environment of the container you are creating. To display them, you can run the following command:

docker run --help

Figure 4-9 displays roughly half of the available parameters. For example, including the -h parameter enables you to specify a host name for the container, other than the hexadecimal string that the command assigns by default.

FIGURE 4-9 Output of the Docker Run --help command
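For example, a sketch of a command that assigns a friendlier host name to a new Server Core container might look like the following; the name core1 is arbitrary:

docker run -it -h core1 microsoft/windowsservercore powershell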

The PowerShell equivalent of the Docker Run command uses the New-Container cmdlet, as in the following example:

new-container -imageidorname microsoft/windowsservercore -input -terminal -command powershell

Create Hyper-V containers

The process of creating a Hyper-V container is almost identical to that of creating a Windows Server container. You use the same Docker Run command, except that you add the --isolation=hyperv parameter, as shown in the following example:

docker run -it --isolation=hyperv microsoft/windowsservercore powershell

Once you create a Hyper-V container, it is all but indistinguishable from a Windows Server container. One of the few ways to tell the types of containers apart is to examine how they handle processes. For example, you can create two containers and execute a command in each one that starts them pinging themselves continuously, as shown in the following commands:

docker run -it microsoft/windowsservercore ping -t localhost

docker run -it --isolation=hyperv microsoft/windowsservercore ping -t localhost

The Windows Server container created by the first command has a PING process running in the container, as shown by the Docker Top command in Figure 4-10. The process ID (PID) number, in this case, is 404. Then, when you run the Get-Process cmdlet, to display the processes (starting with P) running on the container host, you see the same PING process with the 404 ID. This is because the container is sharing the kernel of the container host.

FIGURE 4-10 Output of Docker Top and Get-Process commands for a Windows Server container
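A sketch of the commands used for this comparison might look like the following, where container1 is a placeholder for the ID or automatically generated name that Docker assigned to the container:

docker top container1

get-process p*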

On the other hand, when you run the Docker Top command on the Hyper-V container, you again see the PING process, this time with a PID of 1852, as shown in Figure 4-11. However, the Get-Process cmdlet shows no PING process, because this container has its own kernel provided by the hypervisor.

FIGURE 4-11 Output of the Docker Top and Get-Process commands for a Hyper-V container