
Design Microsoft Azure Infrastructure and Networking

Objective 1.3: Design Azure Compute

You can run both Windows and Linux VMs on Azure to host your workloads. You can provision a new VM easily on Azure at any time so that you can get your workload up and running without spending the time and money to purchase and maintain any hardware. After the VM is created, you are responsible for maintenance tasks such as configuring and applying software patches.

To provide maximum flexibility in workload hosting, Azure offers a rich image gallery with both Windows-based and Linux-based images. It also provides several series of VMs with different amounts of memory and processing power to best fit your workloads. Furthermore, Azure supports VM extensions, with which you can customize the standard images for your project needs.

Selecting VM sizes

The easiest way to create a VM is to use the management portal. You can use either the current portal (https://manage.windowsazure.com) or the new Azure Preview Management Portal (https://portal.azure.com). The following steps use the new portal.

  1. Sign in to the Preview Management Portal (https://portal.azure.com).
  2. In the lower-left corner of the screen that opens, click the New icon, and then, in the center pane, select Compute, as shown in Figure 1-8. (As of this writing, the portal is still in preview, so the exact layout and naming may change.)

    FIGURE 1-8 Creating a new VM

    In the lower-left corner of Figure 1-8, at the bottom of the list, is the option for the Azure Marketplace. This Marketplace provides thousands of first-party and third-party templates for you to deploy necessary Azure resources to support various typical workloads.

  3. In this exercise, you’ll create a new Windows Server 2012 R2 VM, which happens to be the first item in the list. If the VM image is not listed, click Azure Marketplace, then click Everything, and then type in a search keyword to locate the image.
  4. On the Create VM blade (the UI panes on the new Azure Portal are called blades), type a Host Name, a User Name, and a Password, as demonstrated in Figure 1-9.

    FIGURE 1-9 Choosing price tier

    The Host Name will become the host name of your VM. Recall from Objective 1.2 that a Cloud Service with the same name is automatically created as a container of the VM. The VM is also placed into a logical group called a Resource Group. Resource Groups are discussed when Azure Resource Templates are introduced later in this objective. The user name and the password become the credentials of your local administrator account.

  5. Click Pricing Tier, which opens a new blade where you can choose from a variety of configurations (to see the complete list, click the View All link). Click a VM size that you want to use, and then click the Select button to return to the previous blade.
  6. Optionally, click Optional Configuration to examine default settings and to make changes as needed. For example, you can choose to join the VM to a virtual network or create public endpoints using this blade.
  7. Back on the Create VM blade, scroll down to examine other options, such as which Azure subscription to use and the region to which the VM is to be deployed. Make changes as needed.
  8. Leave the Add To Startboard option selected, and then click the Create button to create the VM. After the machine is created, you’ll see a new icon on your Startboard (the customizable home page of the new Preview Management Portal), which provides direct access to the VM.
  9. Click the icon to open the VM blade. Click the Connect icon to establish a remote desktop connection to the VM. You’ll be asked to sign in. Sign in using the credentials you entered in step 4. You’ll also see a certificate warning. Click Yes to continue.
  10. After the connection is established, you can manage the VM just as you would manage any server through remote desktop.

Choosing pricing tiers and machine series

Azure provides two pricing tiers: Basic and Standard. The Basic tier is most suitable for development, testing, and simple production workloads. It doesn’t have features such as load balancing and autoscaling, and there are fewer VM sizes from which to choose. The Standard tier, on the other hand, provides a wide range of VM sizes with features such as load balancing and autoscaling to support production workloads.

Azure organizes VM sizes into machine series: A-series, D-series, DS-series, and G-series. Only part of the A-series is available in the Basic tier; all series are available in the Standard tier. Following is a description of each series:

  • A-series A-series VMs are designed for general-purpose workloads. Table 1-6 lists all available sizes in the A-series. The A0 to A4 sizes are available in both the Basic tier and the Standard tier. Each VM has an operating system (OS) drive and a temporary drive. The OS drives are persistent, but the temporary drives are transient. You can also attach data drives of up to 1 TB each to your VMs. Each data drive has a maximum of 300 Input/Output Operations Per Second (IOPS) in the Basic tier and 500 IOPS in the Standard tier. With more drives, you gain more overall IOPS through parallel I/O operations. Among A-series sizes, A8 through A11 are designed for high-performance computing, which is discussed in Chapter 4.

    TABLE 1-6 A-series VM sizes

    | Size | CPU cores | Memory | OS drive size (GB)/temporary drive size (GB) | Maximum number of data drives | Maximum IOPS |
    | ---- | --------- | ------ | -------------------------------------------- | ----------------------------- | ------------ |
    | A0   | 1         | 768 MB | 1,023/20  | 1  | 1X300/1X500   |
    | A1   | 1         | 1.75 GB | 1,023/40 | 2  | 2X300/2X500   |
    | A2   | 2         | 3.5 GB | 1,023/60  | 4  | 4X300/4X500   |
    | A3   | 4         | 7 GB   | 1,023/120 | 8  | 8X300/8X500   |
    | A4   | 8         | 14 GB  | 1,023/240 | 16 | 16X300/16X500 |
    | A5   | 2         | 14 GB  | 1,023/135 | 4  | 4X500         |
    | A6   | 4         | 28 GB  | 1,023/285 | 8  | 8X500         |
    | A7   | 8         | 56 GB  | 1,023/605 | 16 | 16X500        |
    | A8   | 8         | 56 GB  | 1,023/382 | 16 | 16X500        |
    | A9   | 16        | 112 GB | 1,023/382 | 16 | 16X500        |
    | A10  | 8         | 56 GB  | 1,023/382 | 16 | 16X500        |
    | A11  | 16        | 112 GB | 1,023/382 | 16 | 16X500        |

  • D-series This series of VMs is designed for workloads that demand high processing power and high-performance temporary drives. D-series VMs use solid-state drives (SSDs) for temporary storage, providing much faster I/O than traditional hard drives. Table 1-7 lists all available sizes in the D-series.

    TABLE 1-7 D-series VM sizes

    | Size | CPU cores | Memory (GB) | OS drive size (GB)/temporary drive size (GB) | Maximum number of data drives | Maximum IOPS |
    | ------------ | --- | --- | --------------- | --- | ------ |
    | Standard_D1  | 1   | 3.5 | 1,023/50 (SSD)  | 2   | 2X500  |
    | Standard_D2  | 2   | 7   | 1,023/100 (SSD) | 4   | 4X500  |
    | Standard_D3  | 4   | 14  | 1,023/200 (SSD) | 8   | 8X500  |
    | Standard_D4  | 8   | 28  | 1,023/400 (SSD) | 16  | 16X500 |
    | Standard_D11 | 2   | 14  | 1,023/100 (SSD) | 4   | 4X500  |
    | Standard_D12 | 4   | 28  | 1,023/200 (SSD) | 8   | 8X500  |
    | Standard_D13 | 8   | 56  | 1,023/400 (SSD) | 16  | 16X500 |
    | Standard_D14 | 16  | 112 | 1,023/800 (SSD) | 32  | 32X500 |

  • DS-series DS-series VMs are designed for high I/O workloads. They use SSDs for both VM drives and a local drive cache. Table 1-8 lists all DS-series sizes.

    TABLE 1-8 DS-series VM sizes

    | Size | CPU cores | Memory (GB) | OS drive size (GB)/temporary drive size (GB) | Maximum number of data drives | Cache size (GB) | Maximum IOPS/bandwidth (Mbps) |
    | ------------- | --- | --- | --------------- | --- | --- | ---------- |
    | Standard_DS1  | 1   | 3.5 | 1,023/7 (SSD)   | 2   | 43  | 3,200/32   |
    | Standard_DS2  | 2   | 7   | 1,023/14 (SSD)  | 4   | 86  | 6,400/64   |
    | Standard_DS3  | 4   | 14  | 1,023/28 (SSD)  | 8   | 172 | 12,800/128 |
    | Standard_DS4  | 8   | 28  | 1,023/56 (SSD)  | 16  | 344 | 25,600/256 |
    | Standard_DS11 | 2   | 14  | 1,023/28 (SSD)  | 4   | 72  | 6,400/64   |
    | Standard_DS12 | 4   | 28  | 1,023/56 (SSD)  | 8   | 144 | 12,800/128 |
    | Standard_DS13 | 8   | 56  | 1,023/112 (SSD) | 16  | 288 | 25,600/256 |
    | Standard_DS14 | 16  | 112 | 1,023/224 (SSD) | 32  | 576 | 50,000/512 |

  • G-series G-series VMs are among the largest in the cloud, featuring Intel Xeon E5 v3 family processors. Table 1-9 lists all available sizes in the G-series.

    TABLE 1-9 G-series VM sizes

    | Size | CPU cores | Memory (GB) | OS drive size (GB)/temporary drive size (GB) | Maximum number of data drives | Maximum IOPS |
    | ----------- | --- | --- | ----------------- | --- | ------ |
    | Standard_G1 | 2   | 28  | 1,023/384 (SSD)   | 4   | 4X500  |
    | Standard_G2 | 4   | 56  | 1,023/768 (SSD)   | 8   | 8X500  |
    | Standard_G3 | 8   | 112 | 1,023/1,536 (SSD) | 16  | 16X500 |
    | Standard_G4 | 16  | 224 | 1,023/3,072 (SSD) | 32  | 32X500 |
    | Standard_G5 | 32  | 448 | 1,023/6,144 (SSD) | 64  | 64X500 |
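Choosing among these sizes usually comes down to finding the smallest size that satisfies your workload's requirements. The following Python sketch illustrates that selection logic against a hand-copied subset of the tables above; the list is illustrative, not a live catalog of available sizes.

```python
# Illustrative only: pick the smallest VM size that meets the stated
# minimums. Specs are copied from a subset of Tables 1-6 and 1-7.

# (size name, CPU cores, memory in GB, maximum data drives),
# ordered roughly from smallest to largest
SIZES = [
    ("A1", 1, 1.75, 2),
    ("A2", 2, 3.5, 4),
    ("A3", 4, 7, 8),
    ("Standard_D3", 4, 14, 8),
    ("Standard_D4", 8, 28, 16),
]

def pick_size(min_cores, min_memory_gb, min_data_drives=0):
    """Return the first (smallest) size satisfying all minimums, or None."""
    for name, cores, mem, drives in SIZES:
        if cores >= min_cores and mem >= min_memory_gb and drives >= min_data_drives:
            return name
    return None

print(pick_size(2, 4))       # smallest size with at least 2 cores and 4 GB
print(pick_size(4, 10, 8))   # higher memory and drive requirements
```

In practice you would also weigh the pricing tier, SSD-backed temporary storage, and IOPS limits, but the core decision is the same threshold comparison.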

Using data disks

As previously mentioned, temporary drives are transient, and you should not use them to maintain permanent data. If your application needs local storage for permanent data, you should use data drives. Tables 1-6 through 1-9 show the number of data drives you can attach to each VM size. You can attach both empty data drives and data drives that already contain data. To attach a data drive, go to the Settings blade of your VM, click Disks, and then select either Attach New to create a new data drive, or Attach Existing to attach an existing data drive. Figure 1-10 demonstrates attaching a new data drive to a VM using the Preview Management Portal.

FIGURE 1-10 Attaching a data drive

After a new data drive is attached to a VM, you need to initialize it before you can use it. For Windows-based VMs, you can use the Disk Management tool to initialize the drive and then create a simple volume on it, or a striped volume across multiple drives. For Linux-based VMs, you use a series of commands such as fdisk, mkfs, mount, and blkid to partition, format, and mount the drive.

You can choose a host caching preference—None, Read Only, or Read/Write—for each data drive. The default settings usually work fine, unless you are hosting database workloads or other workloads that are sensitive to small I/O performance differences. For a particular workload, the best way to determine which preference to use is to perform some I/O benchmark tests.

Generally speaking, striped drives yield better performance for I/O-heavy applications. However, you should avoid using geo-replicated storage accounts for your striped volumes, because data loss can occur when recovering from a storage outage (for more information, go to https://msdn.microsoft.com/en-us/library/azure/dn790303.aspx).
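As a back-of-the-envelope check, the best-case aggregate IOPS of a striped volume is the per-disk cap multiplied by the number of disks, bounded by how many data drives the VM size allows. A small illustrative calculation, assuming the Standard-tier 500-IOPS per-disk cap:

```python
def max_striped_iops(num_disks, vm_max_drives, iops_per_disk=500):
    """Best-case aggregate IOPS for a striped volume. The VM size's
    data-drive limit caps how many disks you can actually attach."""
    usable = min(num_disks, vm_max_drives)
    return usable * iops_per_disk

# An A4 (Standard tier) allows up to 16 data drives at 500 IOPS each:
print(max_striped_iops(16, 16))   # 8000
# Asking for more disks than the size allows doesn't help:
print(max_striped_iops(32, 16))   # still 8000
```

Real throughput also depends on the workload pattern and host caching, which is why the benchmark tests mentioned above remain the definitive guide.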

Managing images

There are three sources for Azure VM images: the Azure VM gallery, VM Depot, and custom images. You can use these images as foundations to create, deploy, and replicate your application run-time environments consistently for different purposes such as testing, staging, and production.

  • VM gallery The Azure VM gallery offers hundreds of VM images from Microsoft, partners, and the community at large. You can find recent Windows and Linux OS images as well as images with specific applications, such as SQL Server, Oracle Database, and SAP HANA. MSDN subscribers also have exclusive access to some images, such as Windows 7 and Windows 8.1. For a complete list of the images, go to http://azure.microsoft.com/en-us/marketplace/virtual-machines/.
  • VM Depot The VM Depot (https://vmdepot.msopentech.com/List/Index) is an open-source community for Linux and FreeBSD images. You can find an increasing number of images with various popular open-source solutions such as Docker, Tomcat, and Juju.
  • Custom images You can capture images of your VMs and then reuse these images as templates to deploy more VMs.

Capturing custom images

You can capture two types of images: generalized or specialized.

A generalized image doesn’t contain computer-specific or user-specific settings. These images are ideal for use as standard templates to roll out preconfigured VMs to different customers or users. Before you can capture a generalized image, you need to run the System Preparation (Sysprep) tool on Windows, or the waagent -deprovision command on Linux, and then shut down the VM. All the OS images you see in the VM gallery are generalized. After the VM is captured as an image, the original VM is automatically deleted.

Specialized images, conversely, retain all user settings. You can think of specialized images as snapshots of your VMs. These images are ideal for creating checkpoints of an environment so that it can be restored to a previously known good state. You don’t need to shut down a VM before you capture specialized images, and the original VM is unaffected after the images are captured. However, if a VM is running when an image is captured, the image is in a crash-consistent state. If application consistency or cross-drive capture is needed, you should shut down the VM before capturing the image.

To capture an image, on the Virtual Machine blade, click the Capture button, as shown in Figure 1-11.

FIGURE 1-11 Command icons on the Virtual Machine blade

Using custom images

You can use your custom images to create new VMs just as you would use standard images. If you use a specialized image, you skip the user provisioning step because the image is already provisioned. When a new VM is created, the original VHD files are copied, so they are not affected.

As of this writing, there’s no easy way to use custom images on the new Preview Management Portal. However, with the full management portal, you can use custom images by clicking the My Images link on the Choose An Image page of the Create A Virtual Machine Wizard, as illustrated in Figure 1-12.

FIGURE 1-12 Choosing image in the Create A Virtual Machine Wizard

Alternatively, you can use Azure PowerShell to create a new VM by using a custom image. For example, to create a new VM from the custom image myVMImage, use the following command:

New-AzureQuickVM -Windows -Location "West US" -ServiceName "examService" -Name "examVM" `
-InstanceSize "Medium" -ImageName "myVMImage" -AdminUsername "admin" -Password "sh@ang3!" `
-WaitForBoot

Managing VM states

Custom images provide basic support for deploying workloads consistently across different environments. However, custom images have some undesirable characteristics. First, it’s difficult to revise a custom image. To make any changes, you need to provision the image as a new VM, customize it, and then recapture it. Second, it’s difficult to track what has been changed on an image because of the manual customizations. Third, rolling out a new version is difficult as well: to deploy a new image version, the VM needs to be re-created, making upgrades a lengthy and complex process. What you need are more lightweight, traceable, agile, and scalable state management solutions. This section discusses a number of technologies that enable efficient VM state management.

VM extension

When you provision a new VM, a lightweight Azure Virtual Machine Agent (VM Agent) is installed on the VM by default. The VM Agent is responsible for installing, configuring, and managing Azure VM Extensions (VM Extensions). VM Extensions are first-party or third-party components that you can dynamically apply to VMs. These extensions make it possible for you to dynamically customize VMs to satisfy your application, configuration, and compliance needs. For example, you can protect your VMs by enabling the McAfeeEndpointSecurity extension.

You can use the Azure PowerShell cmdlet Get-AzureVMAvailableExtension to list currently available extensions. Listing 1-1 shows sample output of the cmdlet.

LISTING 1-1 Listing available VM extensions

PS C:\> Get-AzureVMAvailableExtension | Format-Table -Wrap -AutoSize -Property
ExtensionName, Description
ExtensionName                 Description
-------------                 -----------
VS14CTPDebugger               Remote Debugger for Visual Studio 2015
ChefClient                    Chef Extension that sets up chef-client on VM
LinuxChefClient               Chef Extension that sets up chef-client on VM
DockerExtension               Docker Extension
DSC                           PowerShell DSC (Desired State Configuration) Extension
CustomScriptForLinux          Microsoft Azure Custom Script Extension for Linux IaaS
BGInfo                        Windows Azure BGInfo Extension for IaaS
CustomScriptExtension         Windows Azure Script Handler Extension for IaaS
VMAccessAgent                 Windows Azure Json VMAccess Extension for IaaS
....

Custom Script Extension and DSC

Custom Script Extension downloads and runs scripts you’ve prepared on an Azure Blob storage container. You can upload Azure PowerShell scripts or Linux Shell scripts, along with any required files, to a storage container, and then instruct Custom Script Extension to download and run the scripts. The following code snippet shows a sample Azure CLI command to use the Custom Script Extension for Linux (CustomScriptForLinux) to download and run a mongodb.sh shell script:

azure vm extension set \
  -t '{"storageAccountName":"[storage account]","storageAccountKey":"..."}' \
  -i '{"fileUris":["http://[storage account].blob.core.windows.net/scripts/mongodb.sh"],"commandToExecute":"sh mongodb.sh"}' \
  [vm name] CustomScriptForLinux Microsoft.OSTCExtensions 1.*

Using scripts to manage VM states overcomes the shortcomings of managing them with images. Scripts are easier to change and you can apply them faster. And an added benefit is that you can trace all changes easily by using source repositories.

However, writing a script to build up a VM toward a target state is not easy. For each of the required components, you’ll need to check whether the component already exists and whether it is configured in the desired way. You’ll also need to deal with the details of acquiring, installing, and configuring various components to support your workloads. Windows PowerShell Desired State Configuration (DSC) takes a different approach. Instead of describing the steps to build up the VM state, with DSC you simply describe what the desired final state is. DSC then ensures that the final state is reached. The following is a sample DSC script that verifies that the target VM has IIS with ASP.NET 4.5 installed:

Configuration DemoWebsite
{
  param ($MachineName)
  Node $MachineName
  {
    #Install the IIS Role
    WindowsFeature IIS
    {
      Ensure = "Present"
      Name = "Web-Server"
    }
    #Install ASP.NET 4.5
    WindowsFeature ASP
    {
      Ensure = "Present"
      Name = "Web-Asp-Net45"
    }
  }
}
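The declarative idea behind DSC can be illustrated outside PowerShell as well: rather than scripting installation steps, you state the desired end state and let an engine converge the machine toward it, idempotently. The following is a toy Python analogue; the feature names and the "installed" set are stand-ins for real machine state, not a real configuration engine.

```python
# Toy illustration of the DSC idea: declare the desired state, and let a
# convergence function bring the current state in line, idempotently.

desired = {"Web-Server": "Present", "Web-Asp-Net45": "Present"}

def converge(installed, desired):
    """Idempotently bring the `installed` set to the desired state."""
    for feature, state in desired.items():
        if state == "Present" and feature not in installed:
            installed.add(feature)       # would install the feature here
        elif state == "Absent" and feature in installed:
            installed.remove(feature)    # would uninstall the feature here
    return installed

state = {"Web-Server"}        # ASP.NET 4.5 is missing initially
converge(state, desired)
print(sorted(state))          # both features are now present
converge(state, desired)      # running again changes nothing
print(sorted(state))
```

The second call demonstrates the key property: applying the same configuration repeatedly is safe, which is what lets DSC re-check and correct drift over time.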

State management at scale

For larger deployments, you often need to ensure consistent states across a large number of VMs. You also need to periodically check VM states so that they don’t drift from the desired parameters. Automated state management solutions such as Chef and Puppet can save you from having to carry out such repetitive and error-prone tasks.

For both Chef and Puppet, you write cookbooks that you can then apply to a large number of VMs. Each cookbook contains a number of “recipes” or “modules” for various tasks, such as installing software packages, making configuration changes, and copying files. Both facilitate community contributions (Puppet Forge and Chef Supermarket) so that you can accomplish common configuration tasks easily. For example, to get a Puppet module that installs and configures Redis, you can use the puppet tool to pull down the corresponding module from Puppet Forge:

puppet module install evenup-redis

Both Chef and Puppet install agents on your VMs. These agents monitor your VM states and periodically check with a central server to download and apply updated cookbooks. Azure provides VM extensions that bootstrap Chef or Puppet agents on your VMs. Furthermore, Azure also provides VM images that assist you in provisioning Chef and Puppet servers. Chef also supports a hosted server at https://manage.chef.io.

Managing VM states is only part of the problem of managing application run-time environments in the cloud. Your applications often depend on external services. How do you ensure that these external services remain in desired states? The solution is Azure Automation. With Automation, you can monitor events in VMs as well as in external services such as Azure App Service Web Apps, Azure Storage, and Azure SQL Database. Workflows can then be triggered in response to these events.

Automation scripts, called runbooks, are implemented as Azure PowerShell Workflows. To help you author these runbooks, Azure provides the Azure Automation Runbook Gallery, where you can download and share reusable runbooks. Figure 1-13 shows how you can create a new runbook based on existing runbooks in the gallery.

FIGURE 1-13 Choosing a runbook from the Runbook Gallery

Capturing infrastructure as code

Traditionally, development and operations are two distinct departments in an Independent Software Vendor (ISV). Developers concern themselves with writing applications, and the operations staff is concerned with keeping the applications running. However, for an application to function correctly, there are always explicit or implicit requirements for how the supporting infrastructure is configured. Unfortunately, such requirements are often lost in communication, leading to problems such as service outages caused by misconfiguration, friction between development and operations, and difficulty in re-creating and diagnosing issues. All of these problems are unacceptable in an Agile environment.

In an Agile ISV, the boundary between development and operations is shifting. Developers are required to provide consistently deployable applications instead of just application code, so that the deployment process can be automated to roll out fixes and upgrades quickly. This shift changes the definition of an application. An application is no longer just code; it is made up of both application code and an explicit, executable description of its infrastructural requirements. For lack of a better term, such descriptions can be called infrastructure code. The name has two meanings. First, “infrastructure” indicates that it’s not business logic but instructions to configure the application runtime. Second, “code” indicates that it’s not subject to human interpretation but can be applied consistently by an automation system.

Infrastructure code is explicit and traceable, and it makes an application consistently deployable. Consistently deployable applications are one of the key enablers of the DevOps movement. The essence of DevOps is to reduce friction so that software lifecycles can run more smoothly and faster, allowing continuous improvement and innovation. Consistently deployable applications can be deployed and upgraded automatically and regularly across multiple environments. This means faster and more frequent deployments, reduced confusion across different teams, and increased agility in the overall engineering process.

Azure Resource Template

Azure Resource Templates are JSON files that capture infrastructure as code. You can capture all the Azure resources your application needs in a single JSON document that you can consistently deploy to different environments. All resources defined in an Azure Resource Template are provisioned within a Resource Group, which is a logical group for managing related Azure resources.
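To make the shape of such a template concrete, here is a trimmed, illustrative skeleton parsed with Python's json module. A real template lists concrete resource definitions under "resources", and the exact schema URL depends on the API version you target.

```python
import json

# A minimal Azure Resource Template skeleton (illustrative; real templates
# declare concrete resource types and properties under "resources").
template_text = """
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": []
}
"""

template = json.loads(template_text)
print(sorted(template.keys()))
print(template["contentVersion"])
```

Because the template is plain JSON, it can be checked in to source control, diffed, and validated mechanically, which is exactly the "infrastructure as code" property described above.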

You can write an Azure Resource Template from scratch using any text editor. You can also download a template from an Azure template gallery by using Azure PowerShell:

  1. In Azure PowerShell, switch to Azure Resource Manager mode:

    Switch-AzureMode AzureResourceManager
  2. Use the Get-AzureResourceGroupGalleryTemplate cmdlet to list gallery templates. The command returns a large list. You can use the Publisher parameter to constrain the results to a specific publisher:

    Get-AzureResourceGroupGalleryTemplate -Publisher Microsoft
  3. Save and edit the template of interest:

    Save-AzureResourceGroupGalleryTemplate -Identity Microsoft.JavaCoffeeShop.0.1.3-preview `
    -Path C:\Templates\JavaCoffeeShop.json
  4. At the top of the file, an Azure Resource Template contains a schema declaration, followed by a content version number and a “resources” group that contains the resource definitions (see Figure 1-14).

    FIGURE 1-14 Sample Azure Template

    Optionally, you can also define parameters, variables, tags, and outputs. A complete introduction to the template language is beyond the scope of this book. You can use the Test-AzureResourceGroupTemplate cmdlet to validate your template at any time. You need an actual Resource Group in order to use the cmdlet. However, creating a Resource Group is easy:

    New-AzureResourceGroup -Name [resource group name] -Location [location]
  5. Supply the resource group name to the command along with other required parameters, and then validate whether your template is ready to be deployed.

    To deploy a template, use the New-AzureResourceGroupDeployment cmdlet:

    New-AzureResourceGroupDeployment -Name [deployment name] -ResourceGroupName [resource group] `
    -TemplateFile [template file] -TemplateParameterFile [parameter file]

An Azure Resource Template captures the entire topology of all Azure resources required by your application, and you can deploy it with a single Azure PowerShell command. This capability greatly simplifies resource management for complex applications, especially service-oriented architecture (SOA) applications that often have many dependencies on hosted services.

Containerization

In the past few years, container technologies such as Docker have gained great popularity in the industry. Container technologies make it possible to deploy applications consistently by packaging them together with all their required resources as self-contained units. You can build a container manually, or it can be fully described by metadata and scripts. This means that you can manage containers just as you manage source code: check them in to a repository, manage their versions, and reconcile their differences. In addition, containers have other characteristics that make them a favorable choice for hosting workloads in the cloud, which are described in the sections that follow.

Agility

Compared to VMs, containers are much more lightweight because they use process isolation and file-system virtualization to provide process-level isolation among containers. Containers running on the same VM share the same operating system kernel, so the kernel is not packaged as part of the container. Because starting a new container instance is essentially the same as starting a new process, containers start quickly, usually in less than a second. This fast start time makes containers ideal for scenarios such as dynamic scaling and fast failover.

Compute density

Because container instances are just processes, you can run a large number of container instances on a single physical server or VM. This means that by using containers, you can achieve much higher compute density in comparison to using VMs. A higher compute density means that you can provide cheaper and more agile compute services to your customers. For example, you can use a small number of VMs to host a large number of occasionally accessed websites, thus keeping prices competitive. And you can schedule a larger number of time-insensitive batch jobs.
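The density argument is easy to quantify with rough arithmetic. The numbers below are entirely hypothetical, chosen only to show why removing the per-instance guest OS overhead multiplies how many workloads a single host can carry:

```python
# Back-of-the-envelope density comparison (all numbers are hypothetical).

host_memory_gb = 112            # memory on one large host

vm_overhead_gb = 1.5            # assumed guest OS overhead per VM
container_overhead_gb = 0.05    # assumed per-container runtime overhead
app_footprint_gb = 0.5          # the workload itself, in either case

vms = int(host_memory_gb // (vm_overhead_gb + app_footprint_gb))
containers = int(host_memory_gb // (container_overhead_gb + app_footprint_gb))

print(vms)          # how many full VMs fit
print(containers)   # how many containers fit on the same host
```

Even with these made-up figures, the container count is several times the VM count, which is the economic basis for the cheaper, more agile compute services described above.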

Decoupling compute and resources

Another major benefit of using containers is that the workloads running in them are not bound to specific physical servers or VMs. Traditionally, after a workload is deployed, it’s pretty much tied to the server where it’s deployed. If the workload is to be moved to another server, the new one needs to be repurposed for the new workload, which usually means the entire server needs to be rebuilt to play its new role in the datacenter. With containers, servers are no longer assigned with specific roles. Instead, they form a cluster of CPUs, memory, and disks within which workloads can roam almost freely. This is a fundamental transformation in how the datacenter is viewed and managed.

Container orchestration

There are many container orchestration solutions on the market that provide container clustering, such as Docker Swarm, CoreOS Fleet, Deis, and Mesosphere. Orchestrated containers form the foundation of container-based PaaS offerings by providing services such as coordinated deployments, load balancing, and automated failover.

Orchestrated containers provide an ideal hosting environment for applications that use Microservices architecture. You can package each service instance in its own corresponding container. You can join multiple containers together to form a replica set for the service. You can automate container cluster provisioning by using a combination of Azure Resource Template, VM Extensions, Custom Script Extension, and scripts. The template describes the cluster topology, and VM extensions perform on-machine configurations. Finally, automated scripts in containers themselves can perform container-based configurations.

Scaling applications on VMs

In Azure, you can configure applications to scale up or scale out.

Scaling up refers to increasing the compute power of the hosting nodes. In an on-premises datacenter, scaling up means increasing the capacity of the servers by adding memory, processing power, or drive space, and it is constrained by the number of hardware upgrades you can fit into the physical machines. In the cloud, scaling up means choosing a bigger VM size, so it is constrained by the VM sizes available.

Scaling out takes a different approach. Instead of increasing the compute power of existing nodes, scaling out brings in more hosting nodes to share the workload. There’s no theoretical limit to how far you can scale out: you can add as many nodes as needed. This makes it possible to scale an application to a very high capacity that is often hard to achieve by scaling up. Scaling out is the preferable scaling method for cloud applications.
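Because capacity grows roughly linearly with node count when scaling out, capacity planning reduces to a ceiling division. A minimal sketch, with hypothetical requests-per-second figures:

```python
import math

def nodes_needed(total_rps, rps_per_node):
    """Scaling out: capacity grows linearly with node count, so the node
    count for a target load is a ceiling division."""
    return math.ceil(total_rps / rps_per_node)

print(nodes_needed(10_000, 800))   # nodes for 10k requests/second
print(nodes_needed(50_000, 800))   # five times the load, roughly five times the nodes
```

Contrast this with scaling up, where serving five times the load would require a single node five times as powerful, which may simply not exist.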

The rest of this section will focus on scaling out.

Load balancing

When you scale out an application, the workload needs to be distributed among the participating instances through load balancing. (Load-balanced endpoints were introduced earlier in this chapter.) In this case, Azure’s public-facing load balancer distributes the application workload among the participating instances.

However, for multitiered applications, you often need to scale out middle tiers that aren’t directly accessible from the Internet. For instance, you might have a website as the presentation layer and a number of VMs as the business layer. You usually don’t want to expose the business layer, so you make it accessible only to the presentation layer. How would you scale the business layer without a public-facing load balancer? To solve this problem, Azure introduces Internal Load Balancers (ILBs). ILBs provide load balancing among VMs residing in a Cloud Service or a regional virtual network.

ILBs are not publicly accessible; they can be reached only by other roles in the same Cloud Service or by other VMs within the same virtual network. An ILB thus provides an ideal solution for scaling a protected middle tier without exposing the layer to the public. Figure 1-15 shows a tiered application that uses both a public-facing load balancer and an internal load balancer. With this deployment, end users access the presentation layer through Secure Sockets Layer (SSL). The requests are distributed to the presentation layer VMs by the Azure Load Balancer. The presentation layer then accesses the database servers through an internal load balancer.

FIGURE 1-15

FIGURE 1-15 Usage of ILB
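A deployment like the one in Figure 1-15 can be sketched with the classic Azure PowerShell module; the service name, subnet, IP address, and VM names below are placeholders:

```powershell
# Create an ILB named "myILB" inside the Cloud Service, giving it a
# static IP address from the virtual network subnet.
Add-AzureInternalLoadBalancer -ServiceName "myService" `
    -InternalLoadBalancerName "myILB" `
    -SubnetName "Subnet-1" -StaticVNetIPAddress "10.0.0.100"

# Attach each middle-tier VM to the ILB through a load-balanced
# endpoint; traffic to 10.0.0.100:1433 is spread across the set.
Get-AzureVM -ServiceName "myService" -Name "sql1" |
    Add-AzureEndpoint -Name "SqlIn" -Protocol tcp `
        -LocalPort 1433 -PublicPort 1433 `
        -LBSetName "sqlset" -InternalLoadBalancerName "myILB" `
        -DefaultProbe |
    Update-AzureVM
```

Because the endpoint is bound to the ILB rather than to the Cloud Service’s public VIP, port 1433 never becomes reachable from the Internet.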

As mentioned earlier, you can define custom health probes when you define a load-balanced set. You can configure your VMs to respond to health probes from the load balancer via either TCP or HTTP. If a VM fails to respond to a given number of probes, it is considered unhealthy and is taken out of the load balancer’s rotation. The load balancer keeps probing all of the VMs (including the unhealthy ones), so when a failed VM recovers, it automatically rejoins the load-balanced set. You can use this feature to temporarily take a VM out of rotation for maintenance by forcing a false response to the probe signals.
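Assuming the classic Azure PowerShell module and a hypothetical healthcheck.aspx page, the probe for an existing load-balanced set might be reconfigured like this:

```powershell
# Switch the whole "webfarm" set to a custom HTTP probe: the balancer
# calls GET /healthcheck.aspx every 15 seconds and removes a VM from
# rotation after failures spanning the timeout window.
Set-AzureLoadBalancedEndpoint -ServiceName "myService" `
    -LBSetName "webfarm" -Protocol tcp -LocalPort 80 `
    -ProbeProtocolHTTP -ProbePort 80 -ProbePath "/healthcheck.aspx" `
    -ProbeIntervalInSeconds 15 -ProbeTimeoutInSeconds 31
```

The maintenance trick described above then amounts to making healthcheck.aspx return a non-200 status on the VM you want drained; restoring the page brings the VM back into rotation automatically.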

Autoscale

With Azure, you can manually scale the number of VM instances in a Cloud Service. You can also set up autoscale rules to dynamically adjust the system capacity in response to average CPU usage or the number of messages in a queue.

To use autoscaling, you need to add the VMs to an Availability Set. Availability Sets are discussed in Chapter 4; for the moment, you can consider an Availability Set to be a group of VMs for which Azure attempts to keep at least one VM running at any given time. Figure 1-16 shows a sample autoscaling policy on the management portal.

FIGURE 1-16

FIGURE 1-16 Sample autoscaling policy

Let’s go through this policy item by item.

  • Edit Scale Settings For Schedule You can specify different scaling policies for different times of the day, different days of the week, and specific date ranges. For example, if you are running an ecommerce site that expects spikes in traffic during weekends, you can set up a more aggressive scaling policy to ensure that the system performs well under the heavier loads of those periods.
  • Scale By Metric You can choose None, CPU, or Queue. An autoscaling policy without a scale metric is for scheduled scaling scenarios. For the latter two options, Azure monitors the performance of your VMs and adjusts the number of instances to keep the metric within the specified range.
  • Instance Range The Instance Range specifies the lower and upper boundaries of scaling. The lower boundary makes certain that the system maintains a minimum capacity, even if the system is idle. The upper boundary controls the cost limit of your deployment. Each VM instance has its associated cost. You want to set up an appropriate upper limit so that you don’t exceed your budget.
  • Target CPU The Target CPU specifies the desired range of the chosen metric. If the value exceeds the upper limit, scaling up (in this case, a more precise term would be “scaling out”) is triggered. If the value falls below the lower limit, scaling down (again, more precisely “scaling in”) is triggered. Note that the autoscaling system doesn’t respond to every metric value change. Instead, it makes decisions based on the average value over the past hour.
  • Scale Up By You can specify how fast the system is scaled out by setting the scaling steps and the delays between scaling actions.
  • Scale Down By You can control how the system is scaled down. Depending on how your workload pattern changes, you might want to set an aggressive scale-down policy to de-provision the resources quickly after busy hours to reduce your costs.
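The settings above combine into a simple control loop. The following PowerShell-style pseudocode is only an illustration of the rule just described (average CPU over the past hour, scale out above the upper bound, scale in below the lower bound, clamped to the instance range); it is not an Azure cmdlet or API:

```powershell
# Illustrative pseudocode only -- Get-AverageCpuForPastHour and the
# $target*/$instance* variables are hypothetical stand-ins for the
# portal settings shown in Figure 1-16.
$avgCpu = Get-AverageCpuForPastHour
if ($avgCpu -gt $targetCpuMax -and $count -lt $instanceMax) {
    # Scale out, but never beyond the upper instance boundary.
    $count = [Math]::Min($count + $scaleUpBy, $instanceMax)
}
elseif ($avgCpu -lt $targetCpuMin -and $count -gt $instanceMin) {
    # Scale in, but never below the lower instance boundary.
    $count = [Math]::Max($count - $scaleDownBy, $instanceMin)
}
# Azure then waits the configured delay before evaluating again.
```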

Objective summary

  • Azure supports various VM sizes and a gallery of both Linux images and Windows images.
  • You can automate VM state management with Azure Automation and third-party solutions such as Chef and Puppet.
  • VM extensions and Azure PowerShell DSC automate on-machine configuration tasks.
  • DevOps requires infrastructure to be captured as code. With DevOps, an application consists of both application code and infrastructure code so that the application can be deployed consistently and rapidly across different environments.
  • Azure Resource Template captures the entire topology of your application as code, which you can manage just as you do application source code. Resource Templates are JSON files that you can edit using any text editor.
  • Containerization facilitates agility, high compute density, and decoupling of workloads and VMs. It transforms the datacenter from VMs with roles to resource pools with mobilized workloads.
  • You can use autoscale to adjust your compute capacity to achieve balance between cost and customer satisfaction.

Objective review

Answer the following questions to test your knowledge of the information in this objective. You can find the answers to these questions and explanations of why each answer choice is correct or incorrect in the “Answers” section at the end of this chapter.

  1. What VM series should you consider if you want to host applications that require high-performance IO for persisted data?

    1. A-series
    2. D-series
    3. DS-series
    4. G-series
  2. How many data drives can you attach to a Standard_G5 VM (the biggest size in the series)?

    1. 8
    2. 16
    3. 32
    4. 64
  3. What’s the format of an Azure Resource Template?

    1. JSON
    2. XML
    3. YAML
    4. PowerShell
  4. Which of the following technologies can help you to manage consistent states of VMs at scale?

    1. Custom Script Extension
    2. Chef or Puppet
    3. Azure Automation
    4. Containerization