Azure network security

  • 10/4/2016

Azure networking has a lot of moving parts. This chapter from Microsoft Azure Security discusses components from a security perspective, best practices, and some patterns that you might want to adopt for your own deployments.

To understand Microsoft Azure network security, you have to know all the pieces and parts that are included. That means this chapter begins with a description and definition of all the features and services related to Azure networking that are relevant to security. For each feature, the chapter describes what it is and provides some examples to help you understand what the feature does and why it’s good (or bad) at what it does. Some capabilities in Azure networking don’t have a security story to tell, so the chapter leaves out those capabilities.

After the groundwork is laid and you have a better understanding of Azure networking, the chapter discusses Azure security best practices. These best practices are a compilation of things that you should do regarding Azure network security if they are appropriate to your deployment.

The chapter ends with a description of some useful patterns that you might want to use as reference implementation examples on which you can build your own solutions.

The goal of this chapter is to help you understand the “what’s” and “why’s,” because if you don’t understand those, you’ll never get to the “how’s”; if you implement the “how’s” without understanding the “what’s” and the “why’s,” you’ll end up with the same “it sort of grew that way” network that you might have on-premises today. (If your network isn’t like that, consider yourself exceptionally wise or lucky.)

To summarize, the chapter:

  • Discusses the components of Azure networking from a security perspective.

  • Goes over a collection of Azure networking best practices.

  • Describes some Azure network security patterns that you might want to adopt for your own deployments.

One more thing before you venture into the inner workings of Azure networking: If you’ve been with Azure for a while, you’re probably aware that Azure started with the Azure Service Management (ASM) model for managing resources. Even if you haven’t been around Azure since the beginning, you’re probably aware of the “old” and “new” portals (the “old” portal is now called the “classic” portal and the new portal is called the “Azure portal”). The classic portal uses the ASM model. The new portal uses the resource management model known as Azure Resource Manager (ARM). This chapter focuses only on the Azure Resource Manager model and the networking capabilities and behavior related to this model. The reason is that the ASM model is being phased out and has no future, so it would be best to migrate your ASM assets (if you have any) to the Azure Resource Manager model.

Anatomy of Azure networking

Azure networking has a lot of moving parts, and figuring out what these different parts do can be intimidating. The networking documentation focuses on the names of the products, and unfortunately these product names do not always make it easy for you to intuit the functionality of the product or feature. (Of course, this isn’t just an Azure networking problem; you can go to any major cloud service provider’s site and be assailed with the same problem.)

For this reason, this section is broken down into headings that focus on the capability you’re interested in. For example, instead of providing the product name “Azure ExpressRoute” (which is explained later in detail), the heading for that networking capability is “Cross-premises connectivity.” Because most people in networking know what that is, you don’t need to try to figure it out from a product name. This format should help you understand what Azure has to offer in the networking arena.

This section describes the following Azure networking capabilities:

  • Virtual network infrastructure

  • Network access control

  • Routing tables

  • Remote access

  • Cross-premises connectivity

  • Network availability

  • Network logging

  • Public name resolution

  • Network security appliances

  • Reverse proxy

Virtual network infrastructure

Before getting into the Microsoft Azure Virtual Network itself, you should know that all servers that you deploy in Azure are actually virtual machines (VMs). This is important to understand, because some people new to the cloud might think that a public cloud service provider like Microsoft offers dedicated hardware servers as a service.

With the understanding that you use VMs to host servers in Azure, the question is, how do those VMs connect to a network? The answer is that VMs connect to an Azure Virtual Network.

Azure Virtual Networks are similar to the virtual networks provided by on-premises virtualization platforms, such as Microsoft Hyper-V or VMware. Hyper-V is used in Azure, so Azure can take advantage of the Hyper-V virtual switch for networking. You can think of the Hyper-V virtual switch as representing a virtual network to which a VM’s virtual network interface connects.

One thing that might be different than what you use on-premises is how Microsoft isolates one customer’s network from another customer’s network. On-premises, you might use different virtual switches to separate different networks from each other, and that’s perfectly reasonable. You can do that because you control the entire network stack and the IP addressing scheme on your network, in addition to the entire routing infrastructure. In Azure, Microsoft can’t give each customer that level of control because Microsoft needs to reuse the same private IP address space among all the different customers, and Microsoft can’t tell each customer which segment of the private IP address space to use for their VMs.

To get around this challenge, Microsoft takes advantage of the Windows Server software-defined networking stack—also known as “Hyper-V Network Virtualization” (HNV). With HNV, Microsoft can isolate each customer’s network from other customer networks by encapsulating each customer’s network communications within a Generic Routing Encapsulation (GRE) header that contains a field specific to the customer. This effectively isolates each customer’s network from the others, even if different customers are using the same IP address schemes on their Azure Virtual Networks.
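The isolation idea can be sketched with a toy model. This is only an illustration (real NVGRE carries a 24-bit Virtual Subnet ID in the GRE key field, and the fabric does far more); the tenant IDs and addresses below are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Toy model of HNV-style isolation: every packet is wrapped in an outer
# header carrying a tenant ID, so two customers can reuse the same
# private IP space without their traffic ever mixing.

@dataclass(frozen=True)
class EncapsulatedPacket:
    tenant_id: int      # isolation key added by the fabric, not the VM
    src_ip: str         # customer's (possibly overlapping) private address
    dst_ip: str
    payload: bytes

def deliver(packet: EncapsulatedPacket, receiving_tenant_id: int) -> Optional[bytes]:
    """The fabric decapsulates only if the tenant IDs match."""
    if packet.tenant_id != receiving_tenant_id:
        return None  # traffic from another customer is invisible
    return packet.payload

# Two customers both use 10.0.0.1 -> 10.0.0.2; isolation still holds.
pkt_a = EncapsulatedPacket(1001, "10.0.0.1", "10.0.0.2", b"customer A data")
pkt_b = EncapsulatedPacket(2002, "10.0.0.1", "10.0.0.2", b"customer B data")

assert deliver(pkt_a, 1001) == b"customer A data"
assert deliver(pkt_a, 2002) is None  # customer B never sees A's packet
```

The point of the model is that the VM-visible addresses never need to be unique; the fabric-added field is what keeps customers apart.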

Azure Virtual Network provides you with the following basic capabilities:

  • IP address scheme

  • Dynamic Host Configuration Protocol (DHCP) server

  • Domain Name System (DNS) server

IP address scheme

Azure Virtual Networks require you to use private IP addresses (RFC 1918) for VMs. The address ranges are:

  • Class A: 10.0.0.0/8

  • Class B: 172.16.0.0/12

  • Class C: 192.168.0.0/16

You should create an Azure Virtual Network before you create a VM, because all VMs need to be placed on an Azure Virtual Network. Just like with on-premises networking, you should carefully consider which IP address scheme you want to use, especially if you think you will connect your Azure Virtual Network to your on-premises network. In that scenario, you should make sure there is no overlap between the IP addresses you use on-premises and those you want to use on an Azure Virtual Network.
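You can check for overlap before connecting the networks. Here is a minimal sketch using Python’s standard `ipaddress` module; the address ranges are hypothetical examples.

```python
import ipaddress

# Planned address spaces (hypothetical): does the Azure Virtual Network
# range clash with what is already in use on-premises?
on_premises = ipaddress.ip_network("10.0.0.0/16")
azure_vnet = ipaddress.ip_network("10.1.0.0/16")

# No overlap: safe to connect the two networks later.
print(on_premises.overlaps(azure_vnet))  # False

# A range carved out of the on-premises block would clash.
print(on_premises.overlaps(ipaddress.ip_network("10.0.5.0/24")))  # True
```

Running a check like this for every on-premises range before you commit to an Azure address space saves a painful re-addressing exercise later.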

When you create an Azure Virtual Network, you’ll typically choose a large block (or the entire Class A, B, or C range in the preceding list). Then you’ll subnet that range, just as you do on-premises.

From a security perspective, you should think about how many subnets you need and how large to make them, because you’ll want to create access controls between them. Some organizations use subnets to define security zones, and then create network access controls between the subnets by using Network Security Groups (which is explained later) or a virtual appliance.
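As a sketch of that planning step, the `ipaddress` module can carve an address space into candidate security-zone subnets. The zone names and ranges here are hypothetical; NSGs or a virtual appliance would then control traffic between the zones.

```python
import ipaddress

# Carve a hypothetical /16 Azure Virtual Network into /24 subnets.
vnet = ipaddress.ip_network("10.1.0.0/16")
subnets = list(vnet.subnets(new_prefix=24))

# Assign the first few subnets to security zones (illustrative names).
zones = {
    "frontend": subnets[0],  # 10.1.0.0/24
    "app": subnets[1],       # 10.1.1.0/24
    "data": subnets[2],      # 10.1.2.0/24
}

print(zones["frontend"])  # 10.1.0.0/24
print(len(subnets))       # 256 possible /24 subnets in a /16
```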

Another type of addressing you should consider is public addresses. When you create a VM, a public address is assigned to that VM. Note that the public address isn’t bound to the actual network interface (although it might appear that way when you see the description in the portal or read the documentation). The public IP address is the address that external users or devices can use to connect to the VM over the Internet.

Similar to the IP addresses that are actually bound to the network interfaces on the VM itself (explained in the next section), you can assign either a dynamic or static public IP address to a VM.

Dynamic IP addresses on a public interface aren’t as much of a problem as they might be on the internal network—that is to say, on the Azure Virtual Network itself. The reason for this is that DNS is used for Internet name resolution, and few (if any) users or devices are dependent on a static IP address to reach an Internet-reachable resource.

However, there might be situations where you need to use a static IP address on the Internet. For example, you might have network security devices that have access controls so that specific protocols or source IP addresses are allowed access only to specific IP addresses in Azure. When that is the case, you should take advantage of static public IP addresses.

Other scenarios where static public IP addresses might be used include the following:

  • You’ve deployed applications that require communications to an IP address instead of a DNS name.

  • You want to avoid having to remap DNS entries for publicly accessible resources on an Azure Virtual Network.

  • Applications deployed on Azure or other public or private cloud networks need to use static addresses to communicate with your services on an Azure Virtual Network.

  • You use SSL certificates that are dependent on a static IP address.

DHCP servers

After you create an Azure Virtual Network and then place a VM on the network, the VM needs to have an IP address assigned to it to communicate with other VMs on the Azure Virtual Network (in addition to communicating to on-premises resources and even the Internet).

You can assign two types of IP addresses to VMs:

  • Dynamic addresses

  • Static addresses

Both types of addresses are managed by an Azure DHCP server.

Dynamic addresses are typically DHCP addresses that are assigned and managed by the Azure DHCP server. Like any other DHCP-assigned address, the VM’s address is assigned from the pool of addresses defined by the address space you chose for your Azure Virtual Network.

In most cases, the address won’t change over time and you can restart the VM and it will keep the same IP address. However, there might be times when the VM needs to be moved to another host in the Azure fabric, and this might lead to the IP address changing. If you have a server that requires a permanent IP address, then do not use dynamic addressing for that VM.

For VMs that perform roles requiring a static IP address, you can assign a static IP address to the VM. Keep in mind that you do not configure the NIC within the VM to use a static IP address. In fact, you should never touch the NIC configuration settings within a VM. All IP addressing information should be configured within the Azure portal or by using Azure PowerShell.

Examples of VMs that might need dedicated addresses include:

  • Domain controllers.

  • Anything that needs a static address to support firewall rules you might configure on an Azure Virtual Network appliance.

  • VMs that are referenced by hard-coded settings requiring IP addresses.

  • DNS servers you deploy on an Azure Virtual Network (discussed in the next section).

Keep in mind that you cannot bring your own DHCP server. The VMs are automatically configured to use only the DHCP server provided by Azure.

DNS servers

You can use two primary methods for name resolution on an Azure Virtual Network:

  • Azure DNS server

  • Your own DNS server

When you create an Azure Virtual Network, you get a simple DNS server in the bargain, at no extra charge. This simple DNS server service provides you with basic name resolution for all VMs on the same Azure Virtual Network. Name resolution does not extend outside of the Azure Virtual Network.

The simple Azure Virtual Network DNS is not configurable. You can’t create your own A records, SRV records, or any other kind of record. If you need more flexibility than simple name resolution, you should bring your own DNS server.

You can install your own DNS server on an Azure Virtual Network. The DNS server can be a Microsoft standalone DNS server, an Active Directory–integrated DNS server, or a non–Windows-based DNS server. Unlike the situation with DHCP servers on an Azure Virtual Network, you are encouraged to deploy your own DNS servers if you need them.

A bring-your-own DNS server is commonly used when you want to create a hybrid network, where you connect your on-premises network with your Azure Virtual Network. In this way, VMs are able to resolve names of devices on your on-premises network, and devices on your on-premises network are able to resolve names of resources you’ve placed on an Azure Virtual Network.
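The hybrid arrangement amounts to split name resolution: queries for the on-premises suffix are forwarded to the on-premises DNS server, and everything else goes to the DNS server on the Azure Virtual Network. A minimal sketch of that forwarding decision follows; the DNS suffixes and server addresses are hypothetical.

```python
# Which DNS server should answer a given query in a hybrid network?
# Suffix-to-server mapping is illustrative only.
FORWARDERS = {
    "corp.example.com": "192.168.0.10",   # on-premises DNS server
    "azure.example.com": "10.1.0.10",     # DNS server VM on the VNet
}
DEFAULT_RESOLVER = "10.1.0.10"  # fall back to the VNet DNS server

def resolver_for(fqdn: str) -> str:
    """Pick the DNS server responsible for this name's suffix."""
    for suffix, server in FORWARDERS.items():
        if fqdn == suffix or fqdn.endswith("." + suffix):
            return server
    return DEFAULT_RESOLVER

print(resolver_for("fileserver.corp.example.com"))  # 192.168.0.10
print(resolver_for("vm1.azure.example.com"))        # 10.1.0.10
```

Real DNS servers implement this with conditional forwarders or stub zones rather than application code, but the decision they make is the same.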

Network access control

Network access control is as important on Azure Virtual Networks as it is on-premises. The principle of least privilege applies on-premises and in the cloud. One way you enforce network access controls in Azure is by taking advantage of Network Security Groups (NSGs).

The name might be a little confusing. When you hear “Network Security Group,” you might think it’s related to a collection of network devices that are grouped in a way that allows for common or centralized security management. Or maybe you’d think such a group might be a collection of VMs that belong to the same security zone. Both of these assumptions would be wrong.

A Network Security Group is the equivalent of a simple stateful packet-filtering firewall or router, similar to the type of firewalling that was done in the 1990s. That’s not meant as a criticism of NSGs; rather, it makes clear that some techniques of network access control have survived the test of time.

The “Group” part of the NSG name refers to a group of firewall rules that you configure for the NSG. This group of rules defines allow and deny decisions that the NSG uses to allow or deny traffic for a particular source or destination.

NSGs use a 5-tuple to evaluate traffic:

  • Source and destination IP address

  • Source and destination port

  • Protocol: Transmission Control Protocol (TCP) or User Datagram Protocol (UDP)

This means you can control access between a single VM and a group of VMs, or a single VM to another single VM, or between entire subnets. Again, keep in mind that this is simple stateful packet filtering, not full packet inspection. There is no protocol validation or network-level intrusion detection system (IDS) or intrusion prevention system (IPS) capability in a Network Security Group.

An NSG comes with some built-in rules that you should be aware of. These are:

  • Allow all traffic within a specific virtual network. All VMs on the same Azure Virtual Network can communicate with each other.

  • Allow Azure load balancing inbound. This rule allows traffic from any source address to any destination address for the Azure load balancer.

  • Deny all inbound. This rule blocks all traffic sourced from the Internet that you haven’t explicitly allowed.

  • Allow all traffic outbound to the Internet. This rule allows VMs to initiate connections to the Internet. If you do not want these connections initiated, you need to create a rule to block them or enforce forced tunneling (which is explained later).
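To make the evaluation model concrete, here is a minimal sketch of NSG-style 5-tuple filtering: rules are checked in priority order (lowest number first) and the first match wins. The rule set mirrors two of the built-in behaviors above; the addresses, ports, and priority numbers are illustrative, not Azure’s actual defaults.

```python
import ipaddress

# (priority, source network, destination network, protocol, dest port, action)
RULES = [
    (100, "10.1.0.0/16", "10.1.0.0/16", "any", "any", "allow"),  # intra-VNet
    (65500, "0.0.0.0/0", "0.0.0.0/0", "any", "any", "deny"),     # deny inbound
]

def evaluate(src_ip: str, dst_ip: str, protocol: str, dst_port: int) -> str:
    """First matching rule in priority order decides the traffic's fate."""
    for _prio, src_net, dst_net, proto, port, action in sorted(RULES):
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(dst_net)
                and proto in ("any", protocol)
                and port in ("any", dst_port)):
            return action
    return "deny"  # implicit deny if nothing matches

print(evaluate("10.1.0.4", "10.1.1.5", "tcp", 1433))     # allow (intra-VNet)
print(evaluate("203.0.113.9", "10.1.1.5", "tcp", 3389))  # deny (from Internet)
```

Note what the sketch does not do: it never inspects packet contents, which is exactly the “simple stateful packet filtering, not full packet inspection” limitation described above.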

Routing tables

In the early days of Azure, some might have been a bit confused by the rationale of allowing customers to subnet their Azure Virtual Networks. The question was “What’s the point of subnetting, if there’s no way to exercise access controls or control routing between the subnets?” At that time, it seemed that the Azure Virtual Network, no matter how large the address block you chose and how many subnets you defined, was just a large flat network that defied the rules of TCP/IP networking.

Of course, the reason was that no documentation existed regarding what is known as “default system routes.” When you create an Azure Virtual Network and then define subnets within it, Azure automatically creates a collection of system routes that allows machines on the various subnets you’ve created to communicate with each other. You don’t have to define the routes, and the appropriate gateway addresses are automatically provided through the DHCP-assigned addresses.

Default system routes allow Azure VMs to communicate across a variety of scenarios, such as:

  • Communicating between subnets.

  • Communicating with devices on the Internet.

  • Communicating with VMs that are located on a different Azure Virtual Network (when those Azure Virtual Networks are connected to each other over a site-to-site VPN running over the Azure fabric).

  • Communicating with resources on your on-premises network, either over a site-to-site VPN or over a dedicated WAN link (these options are explained later in the chapter).

That said, sometimes you might not want to use all of the default routes. This might be the case in two scenarios:

  • You have a virtual network security device on an Azure Virtual Network and you want to pump all traffic through that device. (Virtual network security devices are explained later in the chapter.)

  • You want to make sure that VMs on your Azure Virtual Network cannot initiate outbound connections to the Internet.

In the first scenario, you might have a virtual network security device in place that all traffic must go through so that it can be inspected. This might be a virtual IDS/IPS, a virtual firewall, a web proxy, or a data leakage protection device. Regardless of the specific function, you need to make sure that all traffic goes through it.

In the second scenario, you want to make sure that VMs cannot initiate connections to the Internet directly. This is different from allowing VMs to respond to inbound requests from the Internet. (Of course, you have to configure a Network Security Group to allow those inbound connections.) Instead, you want all outbound connections to the Internet that are initiated by the VMs to go back through your on-premises network and out your on-premises network security devices, such as firewalls or web proxies.

The solution for both of these problems is to take advantage of User Defined Routes. In Azure, you can use User Defined Routes to control the entries in the routing table and override the default settings.

For a virtual network security device, you configure the Azure routing table to forward all outbound and inbound connections through that device. When you want to prevent VMs from initiating outbound connections to the Internet, you configure forced tunneling.
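The mechanism behind the override is longest-prefix matching: the route with the most specific matching prefix wins, so a User Defined Route for 0.0.0.0/0 pointing at a virtual appliance redirects Internet-bound traffic while more specific routes still win for local subnets. A minimal sketch follows; the next-hop names and addresses are hypothetical.

```python
import ipaddress

# (destination prefix, next hop). One default system route for the VNet
# plus a User Defined Route sending everything else to a virtual appliance.
ROUTES = [
    ("10.1.0.0/16", "VNetLocal"),   # default system route for the VNet
    ("0.0.0.0/0", "10.1.0.100"),    # UDR: hypothetical virtual appliance
]

def next_hop(dst_ip: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    matches = [(ipaddress.ip_network(prefix), hop) for prefix, hop in ROUTES
               if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(prefix)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))       # VNetLocal: intra-VNet traffic unaffected
print(next_hop("93.184.216.34"))  # 10.1.0.100: Internet-bound traffic diverted
```

Forced tunneling works the same way, except that the 0.0.0.0/0 route points at your cross-premises gateway instead of a local appliance.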

Remote access (Azure gateway/point-to-site VPN/RDP/Remote PowerShell/SSH)

One big difference between on-premises computing and public cloud computing is that in public cloud computing you don’t have the same level of access to the VMs as you do on-premises. When you run your own virtualization infrastructure, you can directly access the VMs over the virtual machine bus (VMbus). Access through the VMbus takes advantage of hooks in the virtual platform to the VM so that you don’t need to go over the virtual networking infrastructure.

This isn’t to say that accessing a virtual machine over the VMbus is easy to achieve. There are strong access controls over VMbus access, just as you would have for any network-level access. The difference is that VMbus access for on-premises (and cloud) virtualization platforms is tightly controlled and limited to administrators of the platform. Owners of the virtual machines or the services that run on the virtual machines typically aren’t allowed access over the VMbus—and if they are, this level of access is often temporary and can be revoked any time the virtualization administrators decide it’s necessary.

When you have VMs on a cloud service provider’s network, you’re no longer the administrator of the virtualization platform. This means you no longer have direct virtual machine access over the virtualization platform’s VMbus. The end result is that to reach the virtual machine for configuration and management, you need to do it over a network connection.

In addition to needing to go over a network connection, you should use a remote network connection. This might be over the Internet or over a dedicated WAN link. Cross-premises connectivity options (so-called “hybrid network connections”) are explained in the next topic. This section focuses on remote access connections that you use over the Internet for the express purpose of managing VMs and the services running on the VMs.

Your options are:

  • Remote Desktop Protocol (RDP)

  • Secure Shell Protocol (SSH)

  • Secure Socket Tunneling Protocol (SSTP)–based point-to-site VPN

Each of these methods of remote access depends on the Azure Virtual Network Gateway. This gateway can be considered the primary ingress point from the Internet into your Azure Virtual Network.

Remote Desktop Protocol

One of the easiest ways to gain remote access to a VM on an Azure Virtual Network is to use the Remote Desktop Protocol (RDP). RDP allows you to access the desktop interface of a VM on an Azure Virtual Network in the same way it does on any on-premises network. It is simple to create a Network Security Group rule that allows inbound access from the Internet to a VM by using RDP.

What’s important to be aware of is that when you allow RDP to access a VM from over the Internet, you’re allowing direct connections to an individual VM. No authentication gateways or proxies are in the path—you connect to a VM.

Like all simple things, using RDP might not be the best option for secure remote access to VMs. The reason is that RDP ports are often found to be under constant attack. Attackers typically use brute force in an attempt to obtain credentials and log on to VMs on Azure Virtual Networks. Although brute-force attacks can be slowed down and mitigated by complex user names and passwords, in many cases the VMs that end up compromised are considered temporary VMs and therefore were never given complex user names and passwords.

You might think that if these are temporary VMs, no loss or risk is involved with them being compromised. The problem with this is that sometimes customers put these temporary VMs on Azure Virtual Networks that have development VMs, or even production VMs, on them. Compromising these temporary VMs provides an attacker with an initial foothold into your deployment from which they can expand their breach. You don’t want that to happen.

RDP is easy, and if you’re sure that you’re just testing the services and the VMs in the service, and you have no plans to do anything significant with them, then this scenario is reasonable. As you move from pure testing into something more serious, you should look at other ways to reach your VMs over the Internet. Other methods are described later in this chapter.

Secure Shell Protocol

Remote Desktop Protocol and the Secure Shell Protocol (SSH) are similar in the following ways:

  • Both can be used to access both Windows and Linux VMs that are placed on an Azure Virtual Network.

  • Both provide for direct connectivity to individual VMs.

  • Both expose user names and passwords to brute-force attacks.

As with RDP, you want to avoid exposing VMs to brute-force attacks. Therefore, as a best practice, you should limit direct SSH access to VMs over the Internet. An explanation of how you can use SSH more securely is provided in the next section.

SSTP-based point-to-site VPN

Although “point-to-site” VPN in relation to Azure might sound like a new VPN-type technology (sort of like how so-called “SSL-VPN” is not really a VPN in many cases), it’s not new. Rather, it’s a new name applied to traditional remote access VPN client/server connections, which have been around a long time. What makes point-to-site VPN special is the VPN protocol that’s used, which is the Secure Socket Tunneling Protocol (SSTP).

The SSTP VPN protocol is interesting because, unlike other methods of remote access VPN client/server connections (such as IPsec, L2TP/IPsec, or PPTP), the SSTP protocol tunnels communications over the Internet by using a TLS-encrypted HTTP header. What this means in practice is that SSTP can be used across firewalls and web proxies. Some people might find it funny to hear someone say that SSTP can be used to get across “restrictive firewalls” because it uses TCP 443 to connect to the VPN gateway of your Azure Virtual Network. It sounds funny because, among network security and firewall experts, TCP port 443 is known as the “universal firewall port.” That is to say, if you allow outbound TCP 443, you allow just about everything.

For those of you who are not networking experts, you should understand what a remote access VPN client/server connection is and how it works (from a high level).

When you establish a VPN connection, what you’re doing is creating a virtual “link layer” connection. (Think of an Ethernet cable connection as a link-layer connection.) The amazing thing about VPN is that this link-layer connection actually happens over the Internet. In the case of Azure, you’re establishing that connection between your laptop and the Azure VPN gateway.

The link-layer connection is like a virtual cable (referred to in this book as a “tunnel”) and you can pass just about any kind of network traffic through that tunnel. This is useful because the tunnel is encrypted, so no one can see inside it even though the traffic moves over the Internet.

After the VPN connection is established between your laptop and the Azure VPN gateway, your laptop isn’t connected to a specific Azure VM. Instead, your laptop is connected to an entire Azure Virtual Network, and with this connection, you can reach all the VMs on that Azure Virtual Network. This helps you make RDP and SSH connections more secure. But how does it do that?

The key here is that in order to establish the point-to-site VPN connection, you have to authenticate with the VPN gateway. The Azure VPN gateway and VPN client both use certificates to authenticate with each other. Certificate authentication isn’t susceptible to brute-force attacks like direct RDP or SSH connections over the Internet can be. This is a nice security advantage.

The big advantage comes from the fact that you can run RDP or SSH traffic inside the SSTP VPN tunnel. After you establish the point-to-site VPN connection, you can start your RDP or SSH client application on your laptop and connect to the IP address of the VM on the Azure Virtual Network that you’re connected to. Of course, you have to authenticate again to access the VM.

This means that you can block direct inbound access for the RDP and SSH protocols to VMs on your Azure Virtual Network over the Internet and still reach them by using those protocols after you establish the VPN connection. This entire process is secure because you have to authenticate the VPN connection first, and then authenticate again with the RDP or SSH protocols.

Cross-premises connectivity

The previous section explained how you can connect a single device like a laptop or tablet to an Azure Virtual Network to gain network access to all the VMs connected to that Azure Virtual Network. This section explains how you can connect an entire network to an Azure Virtual Network.

This introduces the topic of what is known as “cross-premises connectivity.” Probably a better term would be “across sites” connectivity, but that doesn’t sound as fancy. Regardless, what this term means is connectivity between two sites: the first is usually your on-premises network (a network that your organization owns and controls), and the second is an Azure Virtual Network. When cross-premises connectivity is enabled, you can pass traffic between the on-premises network and your Azure Virtual Network.

You can do this in two ways with Azure:

  • Site-to-site VPN

  • Dedicated WAN link

Site-to-site VPN

Site-to-site VPN is similar to the point-to-site VPN described earlier. Recall that a point-to-site VPN connects a single device to an Azure Virtual Network. That doesn’t mean only one device can connect at a time, which would block all other connections. It means that each point-to-site connection links one device to the Azure Virtual Network, and multiple devices can each establish their own point-to-site connections to the same Azure Virtual Network at the same time.

In contrast to a point-to-site VPN, with a site-to-site VPN, you can connect an entire network to an Azure Virtual Network. Site-to-site VPNs are sometimes called “gateway-to-gateway” VPNs because each end of the connection is a VPN gateway device.

VPN gateways are like routers. On a non-VPN network, a router is used to route packets to different subnets on your on-premises network. The routed connections go over Ethernet or wireless connections. A VPN gateway acts as a router too, but in the case of the VPN gateway, connections routed over the VPN gateway are not routed from one subnet to another subnet on your on-premises network. Instead, they are routed from your on-premises network to another network over the Internet by using a VPN tunnel. Of course, the remote network can also route packets back to your on-premises network.

When you use a site-to-site VPN with an Azure Virtual Network, you route packets to and from the Azure Virtual Network and your on-premises networks. You must have a VPN gateway on your on-premises network that works with the VPN gateway used by Azure. Most industry standard on-premises VPN gateways work with the Azure VPN gateway. Note that in contrast to the point-to-site VPN connections that use SSTP, the site-to-site VPN uses IPsec tunnel mode for the site-to-site VPN connection.

Using site-to-site VPN connections has a couple of downsides:

  • Connections to Azure top out at around 200 megabits per second (Mbps).

  • They, by definition, traverse the Internet, which could be a security issue.

The first issue really isn’t a security problem in itself; it’s a performance limitation. But performance limitations can bleed into availability, and availability is the “A” in the confidentiality, integrity, and availability (CIA) triad of security. If you exceed your site-to-site VPN bandwidth and your users and devices can’t get to what they need on your Azure Virtual Network, then you have a compromise in availability and, hence, a security issue. In effect, you’ve created a denial-of-service (DoS) attack on yourself by choosing a connectivity option that doesn’t support your application and infrastructure requirements.

The second issue is more of a classic security problem. Any traffic that moves over the Internet is potentially exposed to “hacking,” “cracking,” redirection, and other attempts to compromise the data. Although the site-to-site VPN uses an IPsec tunnel that supports the latest cipher suites and modern encryption technologies, there is always a chance that a dedicated attacker who wants your information will find weaknesses and compromise the data within the tunnel.

That said, if an attacker wants your data that much, he can find easier ways to get to it than trying to compromise your site-to-site VPN connection. But the possibility should be mentioned because the topic of this chapter and book is network security.

For environments that need the highest level of security and performance, you should review the option discussed in the next section, a dedicated WAN link.

Dedicated WAN link

A dedicated WAN link is a permanent connection between your on-premises network and another network. With Azure Virtual Network, a dedicated WAN link provides a permanent connection between your on-premises network and an Azure Virtual Network. These dedicated WAN links are provided by telco providers and do not traverse the Internet. These connections are private, physical connections between your network and an Azure Virtual Network.

Microsoft provides you with the option to create a dedicated WAN link between your on-premises network and Azure Virtual Network by using ExpressRoute. (The name might change over time, so be sure to check the Azure Security Team blog on a regular basis.)

ExpressRoute provides you with:

  • Up to 10 gigabits per second (Gbps) of connectivity between your on-premises network and an Azure Virtual Network.

  • A dedicated, private connection that does not traverse the Internet.

  • A service-level agreement (SLA) that guarantees uptime and performance.

As you can see, the level of performance you get with an ExpressRoute dedicated WAN link far exceeds what you get from a site-to-site VPN. That 10 Gbps is 50 times the maximum speed available with any site-to-site VPN you can establish to an Azure Virtual Network.

The security advantage is clear: the connection doesn’t traverse the Internet and therefore isn’t exposed to all the potential risks that are inherent in an Internet connection. Sure, someone might be able to gain access to the telco, but the odds of that happening are much lower than the security risks that you’re exposed to on the Internet.

The SLAs are important. With a site-to-site VPN, you’re depending on the Internet. The Internet doesn’t have SLAs; you get best-effort delivery from the telco providers and the networks they manage. Your packets move over a number of networks and you hope for the best, but no one can guarantee you uptime or performance. That’s how the Internet-at-large works.

With dedicated WAN links, the telco providers control the entire network. From your premises or co-location, the telco controls all traffic and performance across the channel. They can identify where problems are and fix them, and they can improve performance anywhere they want in the path. That’s why dedicated WAN links are so efficient and expensive.

Note that ExpressRoute provides two different types of dedicated WAN links:

  • Multiprotocol Label Switching (MPLS) to your on-premises network

  • Exchange Provider connectivity, where the ExpressRoute connection terminates at a Telco Exchange Provider location

The MPLS version of ExpressRoute tops out at around 1 Gbps, whereas the Exchange Provider option provides you with up to 10 Gbps.

Network availability

As explained earlier, the “A” in the CIA security triad is availability. From a network perspective, you should ensure that your services are always available and that you take advantage of network availability technologies. Azure has a few availability services that you can take advantage of:

  • External load balancing

  • Internal load balancing

  • Global load balancing

External load balancing

To understand external load balancing, imagine that you have a three-tier application: a web front end, an application logic middle tier, and a database back end. The web front-end servers accept incoming connections from the Internet. Because the web front-end servers are stateless (that is to say, no information that needs to persist beyond a session is stored on them), you deploy several of them. It doesn’t matter which of these your users connect to, because they are all the same and they all forward connections to the middle-tier application logic servers.

To get the highest level of availability and performance from the web front-end server, you should ensure that all the incoming connections are equally distributed to each of the web front ends. You should avoid a situation where one server gets too much traffic. This kind of situation decreases application performance and possibly could make the application unavailable if that server becomes unavailable. To solve this problem, you can use external load balancing.

When you use external load balancing, incoming Internet connections are distributed among your VMs. For web front-end servers, external load balancing ensures that connections from your users are evenly distributed among those servers. This improves performance, because no single VM handles an excessive load, and it also improves uptime: if one or more VMs fail, the other VMs in the load-balanced set can accept the connections.
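The distribution behavior just described can be sketched as a toy round-robin selector over healthy back ends. This is a simplified model, not the actual Azure Load Balancer algorithm (which picks a VM based on a hash of the connection tuple); the VM names are hypothetical:

```python
from itertools import cycle

class ExternalLoadBalancer:
    """Toy model: spread incoming connections across stateless
    web front-end VMs, skipping any that fail health probes."""

    def __init__(self, backends):
        self.healthy = set(backends)
        self._ring = cycle(backends)   # endless round-robin rotation

    def mark_down(self, backend):
        # A failed health probe removes the VM from rotation.
        self.healthy.discard(backend)

    def route(self):
        if not self.healthy:
            raise RuntimeError("no healthy front ends available")
        # Advance the ring until we land on a healthy VM.
        for candidate in self._ring:
            if candidate in self.healthy:
                return candidate
```

The key property the sketch illustrates is the uptime argument from the text: when `mark_down` removes a failed VM, subsequent connections simply flow to the remaining members of the set.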

Internal load balancing

External load balancing is used for incoming connections from the Internet. If you refer back to the example three-tier application discussed earlier, you might want to load balance the other tiers in the solution. The application logic tier and the database tier are different from the web front ends because they are not Internet facing. These tiers do not allow incoming connections from the Internet. In fact, it’s likely that you’ll configure them so that these tiers are allowed to communicate only with the VMs they must communicate with.

For example, the application logic tier needs to accept incoming connections from the web front ends, and no other service (for the time being, ignore the discussion about management access). In this case, you configure a Network Security Group so that the application logic tier VMs can accept only incoming connections from the web front-end servers.

Similarly, the database tier VMs do not need to accept connections from the Internet or the web front ends; they need to accept incoming connections from the application logic VMs only. In this case, you configure a Network Security Group in such a way as to allow incoming connections only from the application logic tier.
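The tier restrictions just described can be modeled as Network Security Group-style rules evaluated in priority order, where the lowest priority number wins and the first match ends evaluation. This is a minimal sketch; the address prefixes and priorities are hypothetical:

```python
from ipaddress import ip_address, ip_network

def evaluate(rules, source_ip):
    """Return the action of the first rule (lowest priority number)
    whose source prefix matches, mirroring NSG rule processing."""
    for priority, prefix, action in sorted(rules):
        if ip_address(source_ip) in ip_network(prefix):
            return action
    return "Deny"  # NSGs end with a default deny-all rule

# App-tier NSG: accept traffic only from the web front-end subnet.
app_tier_rules = [
    (100, "10.0.1.0/24", "Allow"),   # web front-end subnet
    (4096, "0.0.0.0/0", "Deny"),     # explicit catch-all deny
]
```

A connection from a web front end (say `10.0.1.5`) matches the priority-100 rule and is allowed; a connection from any other subnet falls through to the catch-all deny.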

That handles the connection security part of the puzzle, but you still have the availability component to deal with. The web front ends have external load balancing to help them out with that. But what about the application logic and database servers?

For these VMs, you should use internal load balancing.

Internal load balancing works the same way as external load balancing, with the difference being that the source and destination VMs are internal; no source or destination devices are on the Internet for internal load balancing. The source and destination can all be on Azure Virtual Networks, or on an Azure Virtual Network and an on-premises network.

Global load balancing

With cloud computing in Microsoft Azure, you can massively scale your applications—so much so that you can make applications available to almost anybody in the world, and each user, regardless of location, can have great performance and availability.

So far, you’ve read about external and internal load balancing to improve availability. Although these technologies are critical to ensuring that your applications are always online, they suffer from the same limitation: they work on a per-datacenter basis. That is to say, you can only configure internal and external load balancing among VMs located in the same datacenter.

At first you might think that’s not a big problem. The Azure datacenters are large, they have a huge reserve capacity, and if one or a thousand physical servers in the Azure datacenter go down, you’ll still be up and running because of the built-in redundancy in the Azure fabric.

Although that’s all true, you should consider what might happen if an entire datacenter becomes unavailable, or even an entire region. It’s possible that the power might go out for a datacenter or an entire region due to some kind of natural or unnatural event. If your solutions are confined to a single datacenter or region, you might suffer from outages.

If you want to better secure yourself against these kinds of outages, you should design your applications to take advantage of the scalability of the cloud. Azure makes it possible for you to increase the scale by placing components of your applications all over the world.

The trick is to make sure that your users can access those applications. To do this, you should take advantage of something known as a “global load balancer.” Global load balancers take advantage of the Domain Name System (DNS) to ensure that:

  • Users access your service by connecting to the datacenter closest to that user. (For example, if a user is in Australia, that user connects to the Australian Azure datacenter and not one in North America.)

  • Users access your service by connecting to the closest alternate datacenter if the closest datacenter is offline.

  • Users have the best experience with your service by accessing the datacenter that is most responsive, regardless of the location of that datacenter.

Azure provides you with a global load balancer in the form of Azure Traffic Manager. With Azure Traffic Manager, you can:

  • Improve the availability and responsiveness of your applications.

  • Perform maintenance tasks or upgrade your applications and have them remain online and available by having users connect to an alternate location.

  • Distribute and load balance traffic for complex applications that have specific load-balancing requirements.
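The DNS-level selection that a global load balancer performs can be sketched as follows. This is a toy model of performance-based routing with automatic failover, not Traffic Manager’s actual implementation; the endpoint names and latency figures are hypothetical:

```python
def resolve(endpoints, client_region):
    """Return the FQDN of the healthy endpoint with the lowest
    measured latency for the client's region. A global load balancer
    answers the DNS query with this endpoint, so an offline datacenter
    is skipped and the next-best one is handed out instead."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise LookupError("no healthy endpoints")
    return min(healthy, key=lambda e: e["latency_ms"][client_region])["fqdn"]

endpoints = [
    {"fqdn": "app-au.cloudapp.example", "healthy": True,
     "latency_ms": {"australia": 20, "us": 180}},
    {"fqdn": "app-us.cloudapp.example", "healthy": True,
     "latency_ms": {"australia": 190, "us": 25}},
]
```

An Australian user is directed to the Australian deployment; if that deployment’s health probe fails, the same query instead resolves to the North American deployment, matching the failover behavior described in the bullets above.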

Network logging

It’s standard practice to access network information from the network itself. You can do this in many ways, but typically enterprise organizations include some kind of network intrusion detection system (NIDS) inline so that all network traffic can be monitored. How valuable such devices are is a matter of opinion, given the large number of alerts they generate that are never seen or, if seen, never acted on.

Regardless, there is some value in having visibility at the network level. For that reason, many customers are interested in how they can get the same or similar level of visibility into network traffic on their Azure Virtual Network.

At the time this chapter was written, you can’t get the same level of visibility into network traffic that you can get on-premises. Many of the on-premises devices work at the Link layer (OSI layer 2), which is not available on Azure Virtual Networks. The reason for this is that Azure Virtual Networks make use of software-defined networking and network virtualization, so the lowest level of traffic analysis you can get is at the Network layer (OSI layer 3).

It is possible to get network layer network information if you want to push all traffic through a virtual network security device. That is pretty easy to do for traffic destined to and from a particular network subnet, and the way you do it on an Azure Virtual Network is the same as you would do it on-premises: you ensure that the virtual network security device is in the path to the destination subnet by configuring the routing tables on your Azure Virtual Network. You do this by configuring User Defined Routes, which were described earlier in this chapter.
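The routing change that a User Defined Route makes can be sketched as a longest-prefix-match lookup: the most specific matching prefix wins, so a UDR for one subnet overrides the broader system route. The prefixes and appliance address below are hypothetical:

```python
from ipaddress import ip_address, ip_network

def next_hop(route_table, destination_ip):
    """Pick the route whose prefix matches the destination most
    specifically (longest prefix wins)."""
    matches = [(prefix, hop) for prefix, hop in route_table
               if ip_address(destination_ip) in ip_network(prefix)]
    if not matches:
        return None
    return max(matches, key=lambda m: ip_network(m[0]).prefixlen)[1]

# The system route delivers virtual-network traffic directly; a UDR
# forces traffic bound for the 10.0.2.0/24 subnet through a virtual
# network security appliance at 10.0.254.4 so it can be inspected.
routes = [
    ("10.0.0.0/16", "VnetLocal"),   # system route for the whole VNet
    ("10.0.2.0/24", "10.0.254.4"),  # UDR: virtual appliance next hop
]
```

Traffic to `10.0.2.9` matches both prefixes, but the /24 UDR is more specific, so the packet is steered through the appliance; traffic to other subnets still follows the system route.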

Although this is easy for inter-subnet communications, it’s not easy if you want to see what’s happening between two VMs on the same subnet on your Azure Virtual Network. The reason is that you can’t place an intermediary virtual network security device in the path, because traffic between two VMs on the same subnet never crosses a subnet boundary. That doesn’t mean it can’t be done. You can install a VM on each subnet that acts as a proxy (a web proxy and perhaps a SOCKS proxy). Then all communications are sent to the proxy, and the proxy forwards the connections to the destination host on the same subnet. As you can imagine, this can end up being complex and unwieldy if you have even a few subnets.

At this time, you have the ability to get some network information for traffic that moves through Network Security Groups. In particular, you can:

  • Use Azure audit logs to get information about connections made through a Network Security Group.

  • View which Network Security Group rules are applied to VMs and instance roles based on the MAC address.

  • View how many times each Network Security Group rule was applied to deny or allow traffic.

Although this is much less than you can do on-premises, the situation will most likely change soon. In fact, by the time you read this chapter, you might be able to obtain network information and bring your level of access much closer to what you have on-premises. Be sure to check the Azure Security Blog on a regular basis, where that information will be shared with you when it becomes available.

Public name resolution

Although the Azure DNS service is not strictly a security offering and it doesn’t necessarily connect to any specific security scenario, you should be aware that Azure has a DNS server.

You can configure DNS zones in Azure DNS. However, Azure does not provide DNS registrar services, so you’ll need to register your DNS domain name with a commercial domain registrar.

Network security appliances

You’ve read about the option to use virtual network security appliances in a number of places in this chapter. A virtual network security appliance is a VM that you can obtain from the Azure Marketplace and is usually provided by an Azure partner. These VMs are similar to or the same as the network security device VMs you might be using on-premises today. Most of the major network security appliance vendors have their offerings in the Azure Marketplace today, and new ones are added daily. If you don’t find what you want today, be sure to check tomorrow.

Reverse proxy

The final Azure Network component to cover before moving on to the Azure network security best practices section is that of a reverse proxy. If you are relatively new to networking, or haven’t delved into networking beyond what you needed to know, you might not be familiar with the concept of proxy or reverse proxy.

A proxy is a network device that accepts connections for other devices and then recreates that connection to forward the connection request and subsequent packets to the destination. The proxy device, as the name implies, acts on behalf of the computer that is sending the request or the response. The proxy sits in the middle of the communications channel and, because of that, can do many security-related “things” that can help secure your network and the devices within it.

The most popular type of proxy is the “web proxy.” A web proxy accepts connections from a web proxy client (typically a browser configured to use the IP address of the web proxy as its proxy). When the web proxy receives the request, it can inspect the nature of the request and then recreate the request on behalf of the web proxy client. When the destination website responds, the web proxy receives the response on behalf of the web proxy client, and it can inspect the response. After inspecting the response, the web proxy forwards the response traffic back to the client that made the original request.

Why are proxy devices useful in a security context? Some of the things they can do include the following:

  • Require the requestor to authenticate before the proxy accepts and forwards the connection request.

  • Inspect the destination URL to determine whether the destination is safe or dangerous; if it’s dangerous, the proxy can block the connection.

  • Look at request and response traffic to determine whether there is dangerous payload, such as viruses or other malware, and block the malware from being delivered.

  • “Crack open” encrypted communications between client and destination server (such as SSL connections) so that malware, leaked data, and other information that shouldn’t be crossing the proxy boundary is stopped at the proxy. This type of “SSL bridging” can significantly improve security, because attackers often hide what they’re trying to accomplish by encrypting communications, which normally works because most communications are not subject to SSL bridging.
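The first two checks in the list can be sketched as the decision a proxy makes before recreating a client’s request: authenticate the requestor, then vet the destination URL. This is a minimal illustration; the token store and blocklist entries are hypothetical stand-ins for a real authentication backend and URL-reputation feed:

```python
from urllib.parse import urlparse

BLOCKED_HOSTS = {"malware.example"}   # hypothetical reputation blocklist
AUTHORIZED_TOKENS = {"alice-token"}   # hypothetical credential store

def proxy_decision(token, url):
    """Decide whether the proxy should forward a client's request."""
    if token not in AUTHORIZED_TOKENS:
        return "401 proxy authentication required"
    if urlparse(url).hostname in BLOCKED_HOSTS:
        return "403 destination blocked"
    return "forward"
```

Only after both checks pass does the proxy recreate the request toward the destination, which is what puts it in a position to also scan payloads for malware as described above.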

Proxy devices can do these things and a lot more. One could write a book about just proxies. But this book and chapter aren’t about proxies, so don’t dig deeper into them than necessary.

The reason to bring up proxies in this chapter is that Azure has a reverse proxy service that you can use to proxy connections to your on-premises resources. The reverse proxy service is called Azure Active Directory Application Proxy. You won’t find this service in the list of Azure Active Directory products, and you won’t find it in the table of contents. However, you’ll learn about it in this book.

Before going any further, it’s important that you understand the difference between a “forward proxy” and a “reverse proxy.” A forward proxy accepts connections from clients on your on-premises network and forwards those connections to servers on the Internet (or on networks other than the one on which the clients are making the requests). In contrast, a reverse proxy is one that accepts connections from external clients and forwards them to servers on your on-premises network. For those of you with a lot of experience in this area, you recognize that this is a bit of an oversimplification, but it does describe in general the differences between a forward and reverse proxy.
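The distinction can be made concrete with the core data structure of a reverse proxy: a table mapping published external hostnames to internal back ends the client never sees. (In a forward proxy, by contrast, the client itself names the destination.) The hostnames and internal addresses below are hypothetical:

```python
# Reverse proxy routing table: published hostname -> internal back end.
BACKENDS = {
    "mail.contoso.example": "10.0.5.10:443",    # e.g., Exchange
    "portal.contoso.example": "10.0.5.20:443",  # e.g., SharePoint
}

def reverse_route(host_header):
    """Return the internal back end for a published hostname, or None
    if the host isn't published (the request is rejected at the edge)."""
    return BACKENDS.get(host_header.lower())
```

An external client only ever addresses the published name; anything not in the table is dropped at the proxy, which is part of the security value of the reverse proxy pattern.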

Traditional reverse proxy devices are typically placed near the edge of your on-premises network. Servers such as mail servers and collaboration servers (like Microsoft Exchange and SharePoint) can be reached by Internet-based clients through the reverse proxy server. Microsoft used to have its own reverse proxy servers named Internet Security and Acceleration Server (ISA Server) and Threat Management Gateway (TMG). Both those products were excellent but unfortunately were discontinued.

With that said, maintaining on-premises proxy servers can be a lot of work. If you don’t manage them well, they can take down your services, which makes no one happy. What if you could hand the management, troubleshooting, and updating of your reverse proxy server to someone else and avoid all that hassle?

That’s the core value of the public cloud, and the core value of using the Azure Application Proxy server instead of using an on-premises reverse proxy.

The Azure Application Proxy is already built into Azure, and you configure it so that when client systems want to request resources on your on-premises servers, they actually make the request to the reverse proxy on Azure. The Azure Application Proxy forwards those requests back to your on-premises servers.

Like most reverse proxy solutions, the Azure Application Proxy adds a measure of security. Here are some things you can do with the Azure Application Proxy service:

  • Enable single sign-on for on-premises applications.

  • Enforce conditional access, which helps you to define whether or not a user can access the application based on the user’s current location (on or off the corporate network).

  • Authenticate users before their connections are forwarded to your on-premises applications.