Azure network security

Azure Network Security best practices

At this point, you should have a good understanding of what Azure has to offer in the network security space. This chapter provided information about all the major components of Azure networking that have some kind of tie to security, and went over a number of examples so that you have context for each of the components. If you remember and understand everything you’ve read so far, consider yourself in the top 10 percent of the class when it comes to Azure networking.

Understanding the various aspects of Azure networking is required to keep your deployments secure, but knowing what those aspects are and how they work is only the first step. What you should do now is put that knowledge into action by learning a few best practices.

One more thing about best practices: one size does not fit all. Although these best practices are good choices in most cases, they aren’t the right choice in every case. Always consider the environment in which you plan to apply them; sometimes a particular practice simply won’t apply. Use your best judgment and do what is best for your network.

This section covers the following Azure networking best practices:

  • Subnet your networks based on security zones.

  • Use Network Security Groups carefully.

  • Use site-to-site VPN to connect Azure Virtual Networks.

  • Configure host-based firewalls on infrastructure as a service (IaaS) virtual machines.

  • Configure User Defined Routes to control traffic.

  • Require forced tunneling.

  • Deploy virtual network security appliances.

  • Create perimeter networks for Internet-facing devices.

  • Use ExpressRoute.

  • Optimize uptime and performance.

  • Disable management protocols to virtual machines.

  • Enable Azure Security Center.

  • Extend your datacenter into Azure.

Subnet your networks based on security zones

As mentioned earlier, in the section about Azure Virtual Networks, when you create a new Azure Virtual Network, you’re asked to select an IP address space in the Class A, B, or C range. These Azure Virtual Network IP address ranges are large, so you should always create multiple subnets. This is no different than what you do on-premises today.

One thing you should think about is what IP address space you want to use on your Azure Virtual Network. If you plan to connect your on-premises network to one or more Azure Virtual Networks, you need to ensure that there are no IP address conflicts. That is to say, the IP address ranges you select and the subnets you create on your Azure Virtual Networks must not overlap with what you have on-premises. If the ranges overlap, routing conflicts result and traffic will not be routed correctly to the subnets on your Azure Virtual Networks.
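
If you want to sanity-check an address plan before you deploy it, you can test the candidate ranges for overlap programmatically. The following sketch uses only Python’s standard ipaddress module; the address ranges shown are hypothetical placeholders for your own on-premises and Azure ranges.

```python
import ipaddress

# Hypothetical address plan: replace these with your own ranges.
on_premises_ranges = ["10.0.0.0/16", "192.168.1.0/24"]
azure_vnet_ranges = ["10.1.0.0/16", "10.2.0.0/16"]

def find_overlaps(ranges_a, ranges_b):
    """Return every pair of CIDR ranges that overlap between the two lists."""
    overlaps = []
    for a in ranges_a:
        for b in ranges_b:
            if ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b)):
                overlaps.append((a, b))
    return overlaps

conflicts = find_overlaps(on_premises_ranges, azure_vnet_ranges)
if conflicts:
    print("Overlapping ranges found:", conflicts)
else:
    print("No overlap; the Azure address space is safe to use.")
```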

After you decide on your IP address range for your Azure Virtual Network, the next step is deciding how you want to define your subnets. One approach is to define your subnets based on the roles of the VMs you intend to place on those subnets.

For example, suppose you have the following classes of services you want to deploy on an Azure Virtual Network:

  • Active Directory Domain Controllers: You want these to support domain-joined VMs on your Azure Virtual Network.

  • Web front-end servers: You use these to support your three-tier applications.

  • Application logic servers: You use these to support middleware functions for your three-tier applications.

  • Database servers: You use these as the database back ends for your three-tier applications.

  • Update servers: You use these servers to centralize operating system and application updates for the VMs on your Azure Virtual Network.

  • DNS servers: You use these to support Active Directory and non–Active Directory name resolution for servers on your Azure Virtual Network.

You could create just one big subnet and put all your VMs on the same subnet. However, that’s not a great way to help you enable secure network access control. A better solution is to define subnets for each of these roles and then put each VM that supports these roles into a subnet created for each role. That leads you to putting the domain controllers on the domain controllers’ subnet, the database servers on the database server subnet, the web front ends on the web front-ends subnet, and so on.
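
As a rough illustration of this role-based carving, the following sketch splits a hypothetical Azure Virtual Network address space into one /24 subnet per role by using Python’s standard ipaddress module. The role names and prefix sizes are assumptions for illustration, not requirements.

```python
import ipaddress

# Hypothetical VNet address space and role list.
vnet = ipaddress.ip_network("10.1.0.0/16")
roles = [
    "domain-controllers",
    "web-frontend",
    "application-logic",
    "database",
    "update-servers",
    "dns-servers",
]

# Hand out one /24 per role, in order.
subnets = vnet.subnets(new_prefix=24)
plan = {role: str(next(subnets)) for role in roles}

for role, prefix in plan.items():
    print(f"{role:20} {prefix}")
```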

Not only does this help you keep track of where the various servers that participate in each role are located, it also makes it much easier to manage network access controls. For example, if you choose to use NSGs for network access control, you can create a set of rules that is appropriate for all the VMs on a particular subnet. If you need to put another VM on one of the subnets, you don’t need to update the NSG rules; the existing rules already cover every machine on the subnet because they all perform the same role.

To make this clearer, consider the following simple situation with two subnets:

  • Web front-end subnet

  • Application logic subnet

Only web front-end VMs go into the web front-end subnet, and only application logic VMs go into the application logic subnet.

The rules for the web front-end subnet might look like this:

  • Allow inbound TCP port 443 from the Internet to all IP addresses on the web front-end subnet.

  • Allow outbound TCP port 443 from the web front-end subnet to all IP addresses in the application logic server’s subnet.

The rules for the application logic server subnet might look like this:

  • Allow inbound TCP port 443 from the web front-end subnet to all IP addresses on the application logic server’s subnet.

  • Allow outbound TCP port 1433 from the application logic server’s subnet to all IP addresses on the database server subnet.

With these basic rules in place, you can easily put more web front-end servers onto the web front-end subnet without having to make any changes to the Network Security Group rules. The same goes for the application logic server subnet.
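
To make the intent of these rules concrete, here is a minimal, purely illustrative sketch that models the rules as plain data and checks whether a given flow would be allowed, denying by default when nothing matches. The subnet prefixes are hypothetical, and this is not the actual NSG rule schema or evaluation engine; it is only a way to reason about the design.

```python
import ipaddress

# Hypothetical subnet prefixes for the two roles.
WEB_SUBNET = ipaddress.ip_network("10.1.1.0/24")
APP_SUBNET = ipaddress.ip_network("10.1.2.0/24")

# Illustrative rules, roughly mirroring the prose above:
# (direction, source, destination, protocol, port, action)
WEB_RULES = [
    ("inbound",  "Internet",  WEB_SUBNET, "tcp", 443, "allow"),
    ("outbound", WEB_SUBNET,  APP_SUBNET, "tcp", 443, "allow"),
]

def is_allowed(rules, direction, src_ip, dst_ip, protocol, port):
    """Return True if any rule matches the flow; otherwise deny by default."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    for rule_dir, rule_src, rule_dst, rule_proto, rule_port, action in rules:
        if rule_dir != direction or rule_proto != protocol or rule_port != port:
            continue
        src_ok = rule_src == "Internet" or src in rule_src
        dst_ok = rule_dst == "Internet" or dst in rule_dst
        if src_ok and dst_ok:
            return action == "allow"
    return False  # implicit deny

# A new web VM in 10.1.1.0/24 is covered without any rule changes.
print(is_allowed(WEB_RULES, "inbound", "203.0.113.7", "10.1.1.25", "tcp", 443))  # True
print(is_allowed(WEB_RULES, "inbound", "203.0.113.7", "10.1.1.25", "tcp", 22))   # False
```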

Use Network Security Groups carefully

Although Network Security Groups are useful for basic network access control, keep in mind that they do not provide you any level of application layer inspection. All you have control over is the source and destination IP address, source and destination TCP or UDP port number, and the direction to allow access.

Another thing to be aware of is that if you want to create restrictive access rules with Network Security Groups, you have to be aware of what you might inadvertently block. Here are a few examples:

  • VMs need to be able to communicate with IP addresses specific to the host operating system to get DHCP information. If you block access to this host address (you can discover it by running ipconfig on your VMs to see which IP address the DHCP server is using), your VMs will not be able to communicate with the DHCP server and will not be assigned IP addressing information.

  • The DHCP server not only assigns an IP address to the VMs; it also assigns a DNS server and a default gateway. The DNS server will be a host server IP address (that is to say, an IP address owned by the host server, not by you), and the default gateway will be an address on your Azure Virtual Network subnet. If you block access to these IP addresses, you won’t be able to perform name resolution or reach remote subnets. Neither of these conditions leads to trouble-free performance.

  • Another scenario you might not think of is communications outside of your Azure Virtual Network, but still within the Azure fabric itself; for example, when you encrypt Azure Virtual Machines by using Azure Disk Encryption. To encrypt your operating system and data disks, the VM needs to be able to reach the Azure Key Vault service and an Azure Active Directory application (these are prerequisites for Azure Disk Encryption). If you lock down your NSGs too tightly, the VM won’t be able to reach Key Vault or the Azure Active Directory application, and your VMs won’t start because the disks can’t be decrypted.

These are just a few examples. The message is to test your NSG rules thoroughly before going into production. By thoroughly testing, you won’t have to deal with nasty surprises that might turn a successful deployment into a painful experience.

Use site-to-site VPN to connect Azure Virtual Networks

Eventually, you might decide you want to move the majority of your on-premises services into the Azure public cloud. You are likely going to find that as your presence in Azure grows, so will your need to use multiple Azure Virtual Networks.

You might want more than one Azure Virtual Network for many reasons. Some examples include:

  • You have multiple on-premises datacenters, and you want to connect each one to the Azure Virtual Networks that are closest to it.

  • You want resources in one region to be able to communicate with resources in another region over the fastest route possible.

  • You want to use different Azure Virtual Networks to manage different classes of services, or assign them to different departments, or even different divisions or subsidiaries within your company.

These are just three examples, but you can probably come up with more. The point is that Azure Virtual Networks can grow as quickly as your on-premises network has over time. And at some point, you’re going to want to connect some of those Azure Virtual Networks to one another.

The best way to do this is to connect them to each other with a site-to-site VPN connection that runs over the Azure fabric. The site-to-site connection between the Azure Virtual Networks is similar to the site-to-site connection you establish between your on-premises network and an Azure Virtual Network. The difference is that the entire communications path between the Azure Virtual Networks is contained within the highly optimized Azure fabric itself.

An alternative to this approach is to have the Azure Virtual Networks communicate with each other over the Internet. This approach has security and performance implications that make it inferior to site-to-site VPN over the Azure fabric. Another alternative is to loop back through your on-premises network and out through another gateway on your network. In most cases, this is also a less efficient and potentially less secure solution.

Configure host-based firewalls on IaaS virtual machines

This is a best practice on-premises and in the cloud. Regardless of what operating system you deploy in Azure Virtual Machines, you want to make sure that a host-based firewall is enabled, just as you do on-premises.

Another feature of the host-based firewall on Windows virtual machines is IPsec. Although IPsec isn’t widely used for intranet communications, there is a good reason to turn it on: no network can be trusted. Regardless of where a network is and who owns and operates it, you should consider every network, wired or wireless, untrusted.

The dichotomy of the “trusted corporate network” versus “untrusted non-corporate networks” sounded good in the past, before the widespread use of the Internet. But with the collision of multiple trends, such as cloud computing, bring your own device (BYOD), multi-homed devices (wireless devices that connect to a corporate network and other wireless networks at the same time), and portable storage devices of all shapes, sizes, and capacities, it’s not realistic to think that there is a material difference between the innate security of your on-premises network and any other network, including the Internet.

Well, there might be a difference, but thinking and acting otherwise puts you at more risk than necessary. That’s why you should use IPsec for all communications that aren’t encrypted by some other method (such as HTTPS or encrypted SMB 3.0). You can use IPsec on an Azure Virtual Network to authenticate and encrypt all wire communications between VMs on the Azure Virtual Network, in addition to communications between those VMs in Azure and any devices you have on-premises.

If you do choose to use IPsec, be careful not to block the host addresses and ports responsible for DHCP and DNS resolution, the default gateway, or any storage addresses your VM needs to access.

Configure User Defined Routes to control traffic

When you put a VM on an Azure Virtual Network, you might notice that the VM can connect to any other VM on the same Azure Virtual Network, even if the other VMs are on different subnets. This is possible because there is a collection of system routes that are enabled by default that allow this type of communication. These default routes allow VMs on the same Azure Virtual Network to initiate connections with each other, and with the Internet (for outbound communications to the Internet only).

Although the default system routes are useful for many deployment scenarios, sometimes you might want to customize the routing configuration for your deployments. These customizations allow you to configure the next hop address to reach specific destinations.

You should configure User Defined Routes when you deploy a virtual network security appliance, which is described in a later best practice.

There are other scenarios where you might want to configure custom routes. For example, you might have multiple network security appliances that you want to forward traffic to on the same or other Azure Virtual Networks. You might even have multiple gateways you want to use, such as a scenario where you have a cross-premises connection between your Azure Virtual Network and your on-premises location, in addition to a site-to-site VPN that connects your Azure Virtual Network to another Azure Virtual Network or even multiple Azure Virtual Networks.

Just as in the on-premises world, you might end up requiring a complex routing infrastructure to support your network security requirements. For this reason, paying close attention to your User Defined Routes will significantly improve your overall network security. Of course, ensure that you document all your customizations and include the rationale behind each one you make.
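
As a conceptual sketch of how a custom route steers traffic, the following snippet models a route table as a list of prefixes and next hops and resolves a destination by longest prefix match, which is how the most specific route wins. The prefixes and the appliance address are hypothetical, and the next-hop labels only loosely mirror Azure’s next-hop types.

```python
import ipaddress

# Hypothetical route table: (prefix, next_hop). The user-defined route sends
# traffic bound for the on-premises range through a virtual appliance.
ROUTES = [
    ("10.1.0.0/16", "VnetLocal"),                     # default system route
    ("0.0.0.0/0", "Internet"),                        # default system route
    ("192.168.0.0/16", "VirtualAppliance 10.1.4.4"),  # user-defined route
]

def next_hop(dst_ip):
    """Pick the matching route with the longest prefix (most specific match)."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [
        (ipaddress.ip_network(prefix), hop)
        for prefix, hop in ROUTES
        if dst in ipaddress.ip_network(prefix)
    ]
    best = max(matches, key=lambda m: m[0].prefixlen)
    return best[1]

print(next_hop("192.168.10.5"))  # VirtualAppliance 10.1.4.4
print(next_hop("8.8.8.8"))       # Internet
```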

Require forced tunneling

If you haven’t spent a lot of time in the networking space, the term “forced tunneling” might sound a little odd. In the context of Azure networking, it can sound odd even to those who have experience in networking.

To understand why the term might sound odd, it helps to understand where the term comes from. First, what is “tunneling”? As explained earlier in this chapter when we covered VPN technologies, tunneling is a way to move data through an encrypted channel. (For you network purists out there, yes, you can tunnel within non-encrypted protocols, but let’s keep it simple here.) When you establish a VPN connection, you create an encrypted tunnel between two network devices. After the tunnel is established, information can travel more securely within that tunnel.

Now consider a common scenario that many have experienced. You’re in a hotel room and need to create a VPN connection between your laptop and the VPN server at your company. You use whatever software is required to establish the VPN connection. After you establish the connection, you can access servers and services on the corporate network as though you were directly connected to it.

Let’s say that you want to go to a non-corporate website. You open your browser, enter the address, and go to the site. Does that connection go over the VPN connection and out your corporate firewalls, and then back through your corporate firewalls and back to your laptop over the VPN connection for the response?

It depends.

If your computer is configured to allow split-tunneling when using the VPN connection, it means that your computer will access the site by going over the Internet—it will not try to reach the site by going through your VPN connection to the corporate network and out to the Internet through your corporate network firewalls.

Most organizations consider this a security risk because when split-tunneling is enabled, your computer can essentially act as a bridge between the Internet and the corporate network, because it can access both the Internet and your corporate network at the same time. Attackers can take advantage of this “dual connection” to reach your corporate network through your split-tunneling computer.

The term “split-tunneling” itself is a bit of a misnomer, because there’s only a single “tunnel” here: the encrypted VPN tunnel to your corporate network. The connection to the Internet itself is not “tunneled.” So technically, you don’t have a “split tunnel”; you have a “dual connection.” Regardless, sometimes names for things aren’t rational, so you’ll have to accept the industry standard name for this phenomenon.

Let’s say that you don’t want to deal with the risk of split tunneling when your users are connected to your corporate network over VPN. What do you do? You configure something called “forced tunneling.” When forced tunneling is configured, all traffic is forced to go over the VPN tunnel. If you want to go to a non-corporate website, then that request is going to go over the VPN connection and over your corporate network to your corporate firewalls, and then the corporate firewalls will receive the responses and forward the responses back to you over the VPN connection. There will be no “direct” connections to any Internet servers (with “direct” meaning that the connections avoid going over the VPN connection).

What does this have to do with Azure network security?

The default routes for an Azure Virtual Network allow VMs to initiate traffic to the Internet. This process can pose a security risk because it represents a form of split tunneling, and these outbound connections could increase the attack surface of a VM and be used by attackers. For this reason, you should enable forced tunneling on your VMs when you have cross-premises connectivity between your Azure Virtual Network and your on-premises network.
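
In terms of the routing sketch shown earlier, forced tunneling amounts to replacing the catch-all 0.0.0.0/0 route so that its next hop is the cross-premises gateway instead of the Internet. The snippet below shows that one change to the same hypothetical route table.

```python
# With forced tunneling, the catch-all route no longer points at the Internet;
# Internet-bound traffic from the VMs is sent through the cross-premises gateway.
ROUTES = [
    ("10.1.0.0/16", "VnetLocal"),
    ("0.0.0.0/0", "VirtualNetworkGateway"),           # user-defined default route
    ("192.168.0.0/16", "VirtualAppliance 10.1.4.4"),
]
```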

If you do not have a cross-premises connection, be sure to take advantage of Network Security Groups (discussed earlier) or Azure Virtual Network security appliances (discussed next) to prevent outbound connections to the Internet from your Azure Virtual Machines.

Deploy virtual network security appliances

Although Network Security Groups and User Defined Routes can provide a certain measure of network security at the network and transport layers of the OSI model, in some situations you’ll want or need to enable security at higher levels of the stack. In such situations, you should deploy virtual network security appliances provided by Azure partners.

Azure network security appliances can deliver significantly enhanced levels of security over what is provided by network level controls. Some of the network security capabilities provided by virtual network security appliances include:

  • Firewalling

  • Intrusion detection and prevention

  • Vulnerability management

  • Application control

  • Network-based anomaly detection

  • Web filtering

  • Antivirus protection

  • Botnet protection

If you require a higher level of network security than you can obtain with network-level access controls, then you should investigate and deploy Azure Virtual Network security appliances.

Create perimeter networks for Internet-facing devices

A perimeter network (also known as a DMZ, demilitarized zone, or screened subnet) is a physical or logical network segment that is designed to provide an additional layer of security between your assets and the Internet. The intent of the perimeter network is to place specialized network access control devices on the edge of the perimeter network so that only the traffic you want is allowed past the network security device and into your Azure Virtual Network.

Perimeter networks are useful because you can focus your network access control management, monitoring, logging, and reporting on the devices at the edge of your Azure Virtual Network. Here you would typically enable distributed denial-of-service (DDoS) prevention, intrusion detection and prevention systems (IDS/IPS), firewall rules and policies, web filtering, network antimalware, and more. The network security devices sit between the Internet and your Azure Virtual Network and have an interface on both networks.

Although this is the basic design of a perimeter network, many different perimeter network designs exist, such as back-to-back, tri-homed, and multi-homed.

For all high-security deployments, you should consider deploying a perimeter network to enhance the level of network security for your Azure resources.

Use ExpressRoute

Many organizations have chosen the hybrid IT route. In hybrid IT, some of the company’s information assets are in Azure, while others remain on-premises. In many cases, some components of a service are running in Azure while other components remain on-premises.

In the hybrid IT scenario, there is usually some type of cross-premises connectivity. This cross-premises connectivity allows the company to connect their on-premises networks to Azure Virtual Networks. Two cross-premises connectivity solutions are available:

  • Site-to-site VPN

  • ExpressRoute

Site-to-site VPN represents a virtual private connection between your on-premises network and an Azure Virtual Network. This connection takes place over the Internet and allows you to “tunnel” information inside an encrypted link between your network and Azure. Site-to-site VPN is a secure, mature technology that is deployed by enterprises of all sizes. Tunnel encryption is performed by using IPsec tunnel mode.

Although site-to-site VPN is a trusted, reliable, and established technology, traffic within the tunnel does traverse the Internet. In addition, bandwidth is relatively constrained to a maximum of about 200 Mbps.

If you require an exceptional level of security or performance for your cross-premises connections, you should consider using Azure ExpressRoute for your cross-premises connectivity. ExpressRoute is a dedicated WAN link between Azure and your on-premises location or an Exchange hosting provider. Because this is a telco connection, your data doesn’t travel over the Internet and therefore is not exposed to the potential risks inherent in Internet communications.

Optimize uptime and performance

Confidentiality, integrity, and availability (CIA) make up the three factors for evaluating a customer’s security implementation. Confidentiality is about encryption and privacy, integrity is about making sure that data is not changed by unauthorized personnel, and availability is about making sure that authorized individuals are able to access the information they are authorized to access. Failure in any one of these areas represents a potential security breach.

Availability can be thought of as being about uptime and performance. If a service is down, information can’t be accessed. If performance is so poor as to make the data unavailable, then you can consider the data to be inaccessible. Therefore, from a security perspective, you should do whatever you can to ensure that your services have optimal uptime and performance. A popular and effective method used to enhance availability and performance is to use load balancing. Load balancing is a method of distributing network traffic across servers that are part of a service. For example, if you have front-end web servers as part of your service, you can use load balancing to distribute the traffic across your multiple front-end web servers.

This distribution of traffic increases availability because if one of the web servers becomes unavailable, the load balancer will stop sending traffic to that server and redirect traffic to the servers that are still online. Load balancing also helps performance, because the processor, network, and memory overhead for serving requests is distributed across all the load balanced servers.

You should consider employing load balancing whenever you can, as appropriate for your services. The following sections discuss when each option is appropriate. At the Azure Virtual Network level, Azure provides you with three primary load balancing options:

  • HTTP-based load balancing

  • External load balancing

  • Internal load balancing

HTTP-based load balancing

HTTP-based load balancing makes decisions about which server to send connections to based on characteristics of the HTTP protocol. Azure has an HTTP load balancer named Application Gateway.

You should consider using Azure Application Gateway when you have:

  • Applications that require requests from the same user or client session to reach the same back-end VM. Examples of this are shopping cart apps and web mail servers. (A conceptual sketch of this session affinity follows this list.)

  • Applications that want to free web server farms from SSL termination overhead by taking advantage of Application Gateway’s SSL offload feature.

  • Applications, such as a content delivery network, that require multiple HTTP requests on the same long-running TCP connection to be routed or load balanced to different back-end servers.
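
The first scenario in this list relies on session affinity. As a purely conceptual illustration (hypothetical names, not Application Gateway’s implementation), the sketch below pins a client session to a back end by remembering the choice in a cookie-like value.

```python
import hashlib

# Hypothetical back-end pool behind the HTTP load balancer.
BACKENDS = ["web-1", "web-2", "web-3"]

def route(cookies, client_id):
    """Send a session back to the same back end by pinning it in a cookie."""
    if "affinity" in cookies:
        return cookies["affinity"], cookies
    # First request of the session: pick a back end and remember the choice.
    digest = hashlib.sha256(client_id.encode()).hexdigest()
    cookies["affinity"] = BACKENDS[int(digest, 16) % len(BACKENDS)]
    return cookies["affinity"], cookies

backend, cookies = route({}, "client-42")
print(backend)                          # for example, web-2
print(route(cookies, "client-42")[0])   # the same back end on the next request
```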

External load balancing

External load balancing takes place when incoming connections from the Internet are load balanced among your servers located in an Azure Virtual Network. The Azure External Load Balancer can provide you with this capability, and you should consider using it when you don’t require sticky sessions or SSL offload.

In contrast to HTTP-based load balancing, the External Load Balancer uses information at the network and transport layers of the OSI networking model to make decisions on what server to load balance connections to.
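
Conceptually, this kind of layer 3/4 load balancing maps each connection to a back end by hashing the connection’s addressing information; Azure’s load balancer uses a five-tuple hash of source IP, source port, destination IP, destination port, and protocol. The sketch below illustrates the idea with hypothetical back-end addresses; it is a conceptual model, not the actual distribution algorithm.

```python
import hashlib

# Hypothetical back-end pool behind the load balancer.
BACKENDS = ["10.1.1.4", "10.1.1.5", "10.1.1.6"]

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol):
    """Map a flow to a back end by hashing its five-tuple (conceptual only)."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = hashlib.sha256(flow).digest()
    index = int.from_bytes(digest[:4], "big") % len(BACKENDS)
    return BACKENDS[index]

# Packets from the same flow always land on the same back end.
print(pick_backend("203.0.113.7", 50123, "10.1.1.100", 443, "tcp"))
print(pick_backend("203.0.113.7", 50123, "10.1.1.100", 443, "tcp"))
```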

You should consider using External Load Balancing whenever you have stateless applications accepting incoming requests from the Internet.

Internal load balancing

Internal load balancing is similar to external load balancing and uses the same mechanism to load balance connections to the servers behind it. The only difference is that the load balancer in this case accepts connections from clients that are not on the Internet. In most cases, the connections that are accepted for load balancing are initiated by devices on an Azure Virtual Network.

You should consider using internal load balancing for scenarios that will benefit from this capability, such as when you need to load balance connections to SQL servers or internal web servers.

Global load balancing

Public cloud computing makes it possible to deploy globally distributed applications that have components located in datacenters all over the world. This is possible on Azure due to its global datacenter presence. In contrast to the load balancing technologies mentioned earlier, global load balancing makes it possible to make services available even when entire datacenters might become unavailable.

You can get this type of global load balancing in Azure by taking advantage of Azure Traffic Manager. Traffic Manager makes it possible to load balance connections to your services based on the location of the user.

For example, if the user is making a request to your service from the European Union, the connection is directed to your services located in a European Union datacenter. This part of Traffic Manager global load balancing helps to improve performance because connecting to the nearest datacenter is faster than connecting to datacenters that are far away.

On the availability side, global load balancing ensures that your service is available even if an entire datacenter becomes unavailable.

For example, if an Azure datacenter becomes unavailable due to environmental reasons or outages such as regional network failures, connections to your service would be rerouted to the nearest online datacenter. This global load balancing is accomplished by taking advantage of DNS policies that you can create in Traffic Manager.
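
The following sketch illustrates the general idea of DNS-based routing: a query is answered with the endpoint for the requester’s region, and it falls back to another region when the preferred one is unhealthy. The region names and endpoint hostnames are hypothetical, and the logic is a simplification of what Traffic Manager actually does.

```python
# Hypothetical regional endpoints and their health state.
ENDPOINTS = {
    "europe": {"host": "app-eu.example.com", "healthy": True},
    "us":     {"host": "app-us.example.com", "healthy": True},
    "asia":   {"host": "app-asia.example.com", "healthy": False},
}

def resolve(user_region):
    """Answer a DNS query with the nearest healthy endpoint (simplified)."""
    preferred = ENDPOINTS.get(user_region)
    if preferred and preferred["healthy"]:
        return preferred["host"]
    # Fall back to any healthy endpoint if the preferred region is down.
    for endpoint in ENDPOINTS.values():
        if endpoint["healthy"]:
            return endpoint["host"]
    raise RuntimeError("no healthy endpoints")

print(resolve("europe"))  # app-eu.example.com
print(resolve("asia"))    # falls back to a healthy region
```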

You should consider using Traffic Manager for any cloud solution you develop that has a widely distributed scope across multiple regions and requires the highest level of uptime possible.

Disable management protocols to virtual machines

It is possible to reach Azure Virtual Machines by using the RDP and SSH protocols. These protocols make it possible to manage VMs from remote locations and are standard in datacenter computing.

The potential security problem with using these protocols over the Internet is that attackers can use various brute-force techniques to gain access to Azure Virtual Machines. After the attackers gain access, they can use your VM as a launch point for compromising other machines on your Azure Virtual Network or even attack networked devices outside of Azure.

Because of this, you should consider disabling direct RDP and SSH access to your Azure Virtual Machines from the Internet. With direct RDP and SSH access from the Internet disabled, you have other options you can use to access these VMs for remote management:

  • Point-to-site VPN

  • Site-to-site VPN

  • ExpressRoute

Point-to-site VPN is another term for a remote access VPN client/server connection. A point-to-site VPN enables a single user to connect to an Azure Virtual Network over the Internet. After the point-to-site connection is established, the user is able to use RDP or SSH to connect to any VMs located on the Azure Virtual Network that the user connected to via point-to-site VPN. This assumes that the user is authorized to reach those VMs.

Point-to-site VPN is more secure than direct RDP or SSH connections because the user has to authenticate twice before connecting to a VM. First, the user needs to authenticate (and be authorized) to establish the point-to-site VPN connection; second, the user needs to authenticate (and be authorized) to establish the RDP or SSH session.

A site-to-site VPN connects an entire network to another network over the Internet. You can use a site-to-site VPN to connect your on-premises network to an Azure Virtual Network. If you deploy a site-to-site VPN, users on your on-premises network are able to connect to VMs on your Azure Virtual Network by using the RDP or SSH protocol over the site-to-site VPN connection, and it does not require you to allow direct RDP or SSH access over the Internet.

You can also use a dedicated WAN link to provide functionality similar to the site-to-site VPN. The main differences are:

  • The dedicated WAN link doesn’t traverse the Internet.

  • Dedicated WAN links are typically more stable and performant.

Azure provides you with a dedicated WAN link solution in the form of ExpressRoute.
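
Whichever of these management paths you choose, the complementary network access control is to stop exposing the management ports to the Internet. The sketch below lists priority-ordered, NSG-style entries that deny RDP (TCP 3389) and SSH (TCP 22) from the Internet while still allowing them from an assumed on-premises range reached over VPN or ExpressRoute; the structure and values are illustrative, not the actual NSG schema.

```python
# Illustrative, priority-ordered rules: the lowest priority number is evaluated
# first and the first match wins, as with NSG rules. Ranges are hypothetical.
MANAGEMENT_RULES = [
    {"priority": 100, "direction": "inbound", "protocol": "tcp",
     "ports": [3389, 22], "source": "192.168.0.0/16",  # assumed on-premises range
     "action": "allow"},
    {"priority": 200, "direction": "inbound", "protocol": "tcp",
     "ports": [3389, 22], "source": "Internet",
     "action": "deny"},
]
```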

Enable Azure Security Center

Azure Security Center helps you prevent, detect, and respond to threats, and provides you with increased visibility into, and control over, the security of your Azure resources. It provides integrated security monitoring and policy management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with a broad ecosystem of security solutions.

Azure Security Center helps you optimize and monitor network security by:

  • Providing network security recommendations.

  • Monitoring the state of your network security configuration.

  • Alerting you to network-based threats both at the endpoint and network levels.

It is highly recommended that you enable Azure Security Center for all of your Azure deployments.

Extend your datacenter into Azure

Many enterprise IT organizations are looking to expand into the cloud instead of growing their on-premises datacenters. This expansion represents an extension of existing IT infrastructure into the public cloud. By taking advantage of cross-premises connectivity options, it’s possible to treat your Azure Virtual Network as just another subnet on your on-premises network infrastructure.

However, many planning and design issues need to be addressed first. This is especially important in the area of network security. One of the best ways to understand how you approach such a design is to see an example.

Microsoft has created the Datacenter Extension Reference Architecture Diagram and supporting collateral to help you understand what such a datacenter extension would look like. This provides a reference implementation that you can use to plan and design a secure enterprise datacenter extension to the cloud. You should review this document to get an idea of the key components of a secure solution.