Designing High Availability with Microsoft Exchange Server 2010

  • 7/15/2010

Availability Planning for Client Access Servers

Unlike the Mailbox server role and, to some extent, the Transport server roles, the Client Access server role has no inherent high-availability functionality built in. That does not mean it was designed without high availability in mind; it simply relies on external mechanisms, so a separate product or feature is required to provide this functionality. The following sections cover choosing and configuring the best solution for your deployment requirements.

Client Access Load Balancing and Failover Solutions

Providing Client Access high availability requires deploying multiple Client Access servers in the same Active Directory site. As mentioned, there is no integrated mechanism to provide load balancing and failover if a host becomes unavailable or overloaded; however, a variety of products are available that fill this need. Because the Client Access servers provide so many services over a number of different connection types (from OWA to MAPI to Web Services), three types of Client Access server traffic actually need to be load balanced:

  • Traffic from internal networks

  • Traffic from external (Internet) networks

  • Traffic from other Client Access Servers (proxy)


Some Exchange communications are stateful, meaning the application requires that the communication context be maintained with the same host until the session is completed. This is common in conversations that we have daily. If a co-worker asks what the deadline is for your project and then you walk into another co-worker’s office and say “Wednesday,” she will likely have no idea that you were answering John’s question. This is similar to how a stateful program works: It expects to continue communication with the same context until the conversation is completed. Other protocols are stateless, such as HTTP, where state information is lost between client requests. In the case of multiple, load-balanced hosts, affinity is a mechanism to direct subsequent calls to the host that answered the initial request.

It is important to understand the different types of affinity and how they are used. The Client Access server uses a number of protocols that need to be load balanced, including HTTP and RPC. Remember that some Client Access server protocols require affinity and some do not.

Existing Cookies

Existing cookie affinity uses cookie information transmitted during typical client/server sessions. This type of affinity is only useful for protocols using HTTP and thus not an option for any RPC communication. OWA using forms-based authentication is an example of an application that does use existing or application cookies.

Load Balancer Cookies

Using load balancer cookies is similar to using existing cookies except that the load balancer creates the cookie and does not rely on any existing cookies. As with existing cookies, this is only usable with HTTP. Additionally, the client must support the addition of the load balancer–generated cookie. Exchange ActiveSync, Outlook Anywhere, and some Exchange Web Services do not support this capability. However, Outlook Web App, Exchange Control Panel, and Remote Windows PowerShell are good candidates for this type of affinity.
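The cookie-insertion mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any product's API: if a request carries the balancer's cookie, the balancer honors it; otherwise it picks a host round-robin and sets the cookie so later requests stick.

```python
# Minimal sketch of load-balancer cookie affinity (hypothetical, not a product API).
from itertools import cycle

COOKIE = "LB-HOST"  # hypothetical cookie name chosen by the load balancer

class CookieAffinity:
    def __init__(self, hosts):
        self._hosts = set(hosts)
        self._rotation = cycle(hosts)   # round-robin for clients without a cookie

    def route(self, request_cookies):
        host = request_cookies.get(COOKIE)
        if host not in self._hosts:          # no cookie yet, or a stale one
            host = next(self._rotation)
        response_cookies = {COOKIE: host}    # re-issued on every response
        return host, response_cookies

lb = CookieAffinity(["cas1", "cas2"])
host, cookies = lb.route({})     # first request carries no cookie
host2, _ = lb.route(cookies)     # follow-up request sticks to the same host
```

Note that this only works when the client returns the cookie, which is why clients such as Exchange ActiveSync cannot use this affinity type.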

Source IP

Source IP is perhaps the most common and widely supported type of affinity. With Source IP affinity, the load balancer records a client’s IP address and the initial destination host. All subsequent traffic from that source IP will continue to go to the same destination host for a period of time. However, source IP load balancing has two main drawbacks.

First, affinity breaks when clients change their IP addresses. If you have an environment where this happens frequently, such as mobile clients roaming between wireless networks, this will cause issues. Users may experience symptoms such as having to re-authenticate.

Second, if you have an environment where many clients share the same source IP, such as when a device performing Network Address Translation (NAT) is used, the load will not be evenly distributed because all clients behind the NAT will be routed to the same Client Access server.
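Both behaviors above fall out of a simple model of source IP affinity. The sketch below is a hypothetical illustration (no real load balancer API is assumed): the first request from an IP is assigned round-robin, and all later requests from that IP stick to the same host, which also shows why every client behind a single NAT address lands on one server.

```python
# Minimal sketch of source-IP affinity (hypothetical, not a product API).
from itertools import cycle

class SourceIpAffinity:
    def __init__(self, hosts):
        self._rotation = cycle(hosts)   # round-robin assignment for new clients
        self._affinity = {}             # source IP -> assigned host

    def route(self, source_ip):
        if source_ip not in self._affinity:
            self._affinity[source_ip] = next(self._rotation)
        return self._affinity[source_ip]

lb = SourceIpAffinity(["cas1", "cas2"])
lb.route("10.0.0.5")   # new client: assigned round-robin
lb.route("10.0.0.9")   # next new client: assigned the other host
lb.route("10.0.0.5")   # repeat client: sticks to its original host
# A NAT device presents one source IP, so all clients behind it
# hit whichever single host that IP was mapped to.
```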

SSL Session ID

An SSL session ID is generated when an SSL-encrypted session is established. The SSL session ID has a big advantage over source IP affinity: it can uniquely identify clients that share the same source IP address. Another advantage is that the load balancer is not required to decrypt the SSL traffic; in fact, not decrypting the traffic is a hard requirement when client certificate authentication is used. Renegotiating the SSL session also puts additional overhead on the server, so directing traffic to the same server saves processing time and prevents performance impacts.

SSL session ID does not work well with all clients. Some browsers and mobile devices, such as Microsoft Internet Explorer 8.0, create a new SSL session for each browser process. Therefore, every time a user creates a new e-mail message, a separate window opens, which creates a new SSL session. The exception is when client certificate authentication is used; in that case the same SSL session ID is used for all communication to a specific host.

Outlook Anywhere and some mobile clients also open several Client Access server sessions. Each session receives a different SSL session ID, so each session could end up connected to a different server. As discussed earlier, this is not a problem because Windows Server 2008 network load balancing can correlate the RPC_IN_DATA and RPC_OUT_DATA channels; however, it does cause additional overhead and can negatively impact server performance.

Selecting a Load Balancer Type

To lower cost and complexity, you should select a single load-balancing solution that works for each type of traffic. A large number of load-balancing options are available on the market; it is important to make an informed choice. Consider the following criteria during the decision-making process:

  • Features Does the load balancer have features such as SSL offloading that you will use now and in the future?

  • Manageability How easy is the solution to configure and maintain?

  • Failover detection Does the solution support advanced detection (service awareness) or simple ping (host awareness)?

  • Affinity What options does the solution support to keep client connections returning to the same host?

  • Cost How much will it cost to implement the solution?

  • Scale How does the solution work as the number of hosts increases?

Load balancers can be categorized into four distinct categories: Software Load Balancers, Hardware Load Balancers, Intelligent Firewalls, and Round Robin DNS. The following sections discuss each of these categories.

Software Load Balancing

Windows Network Load Balancing (NLB) has been part of the Windows Server operating system since Windows NT 4.0. Of course, a lot has changed since its early days. NLB can scale to 32 hosts on Windows Server 2008 R2, but the practical limit for Exchange is 8 hosts based on documentation provided about Microsoft’s internal deployment experience. One advantage of NLB is that it is relatively inexpensive to implement.

One disadvantage of NLB is that you cannot combine it with Windows failover clustering. If you are trying to configure an all-in-one server that holds both the Mailbox and Client Access server roles, and you are using DAGs, you must use a non-NLB load-balancing solution for client access. Another drawback is that NLB supports only source IP affinity or no affinity, which may limit its ability to load balance all of the Client Access protocols effectively. NLB also has no built-in intelligence to test server health or functionality before sending traffic to a host: if the IIS service has stopped on one Client Access server, NLB will continue to send traffic to that node until it is manually reconfigured. This can be partially overcome by deploying NLB along with Microsoft System Center Operations Manager 2007 R2 and the NLB Management Pack, which may be an option for organizations that already use Operations Manager.

Other software-based load balancers are installed on a separate server or other hardware. These solutions often resemble hardware load balancers or application firewalls more than they do NLB.

Hardware Load Balancers

If you need to support more than eight nodes in your Active Directory site, you must consider a hardware load balancer. Having a dedicated piece of specialized hardware allows for the best performance and a considerable number of features. Most hardware load balancers support multiple affinity types, and even allow for the ability to fall back if one type fails. Typically, hardware load balancers support more advanced node health checks. These range from simple ping tests to measuring response times to custom Web pages. More expensive solutions also provide hardware redundancy, further eliminating any single points of failure.

Probably the biggest disadvantage is the cost of deploying a hardware solution. However, for large-scale deployments, this is typically the solution selected.

Application Firewalls

Application (Intelligent) firewalls, such as Microsoft Threat Management Gateway (TMG) or Forefront Unified Access Gateway (UAG), are similar to the hardware load balancer solution, but can also provide additional security features. For example, with Active Directory Domain Services (AD DS) security groups, you can control what time of the day groups of users can access OWA.

One disadvantage is that with this great power comes great complexity. These solutions require more testing and more administration and operational support than the other solutions. Another disadvantage is that these products do not perform RPC load balancing; another solution is required for that.

DNS Round Robin

DNS round robin uses DNS's ability to map multiple hosts to a common name. For example, if you have three Client Access servers, you create three A records that map the same host name to the three server IP addresses.

The first client to resolve the name would receive the first IP address, the second request would receive the second address, and the third request the third. The fourth request would receive the first IP address again, and the pattern would continue. The main advantage of DNS round robin is that it costs little or nothing to implement and is very easy to configure.
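The rotation described above can be sketched with a toy resolver. This is a hypothetical illustration of the behavior, not a real DNS API: each query returns the record list rotated by one position, so successive clients receive different first answers.

```python
# Minimal sketch of DNS round-robin rotation (hypothetical resolver, not a real DNS API).
from collections import deque

class RoundRobinZone:
    def __init__(self, name, addresses):
        self.name = name
        self._records = deque(addresses)

    def resolve(self, name):
        if name != self.name:
            raise KeyError(name)
        answer = list(self._records)   # current record ordering
        self._records.rotate(-1)       # next query starts one record further along
        return answer

zone = RoundRobinZone("cas.example.com", ["192.0.2.1", "192.0.2.2", "192.0.2.3"])
zone.resolve("cas.example.com")[0]   # first query returns the first address
zone.resolve("cas.example.com")[0]   # second query returns the second, and so on,
                                     # wrapping back to the first on the fourth query
```

Note that a real DNS server keeps no per-client state; clients and intermediate resolvers cache whichever answer they received, which is exactly the limitation discussed next.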

Unfortunately, the limitations of DNS round robin restrict its use to lab environments and very small implementations. These limitations include no support for affinity, which leaves the application responsible for maintaining affinity. For example, a Web browser will use whichever IP address the DNS server returns from the name resolution query, and Internet Explorer caches that DNS entry for about 30 minutes. If the server becomes unavailable during that cache period, the browser cannot be automatically redirected to another server; it will keep trying to reach the unavailable server until its cache expires. DNS round robin also has no health checks or dead-node removal: if one of the three servers becomes unavailable, DNS will continue to return the down host's IP address on every third request unless it is manually reconfigured. Another problem is that if multiple clients share the same local DNS server, as in a LAN environment, all of those clients will use the same IP address cached by that DNS server; if most clients are in the same location, the load will not be balanced evenly across the servers. Finally, changes to DNS take time to propagate, so a newly added Client Access server will be underutilized until its record propagates fully.

Global Server Load Balancing

Global server load balancing (GSLB), or wide-area load balancing, is a more sophisticated version of DNS round robin, typically deployed as a dedicated hardware device or as a feature of a hardware load balancer. This type of load balancing uses DNS to load-balance client connectivity between sites based on a number of factors, such as the location of the client, response time of the servers, availability of the servers, custom weights, and more. GSLB is typically used in multiple-site configurations to provide load balancing between sites. To provide full site redundancy, the GSLB device should be located outside of either load-balanced site or deployed in multiple sites. One way to use GSLB is to load-balance Autodiscover to ensure that it remains available even during a single-site outage. In Figure 11-12, the Autodiscover host name is set up for GSLB; all traffic is sent to the IP address for the Denver Autodiscover service. If Denver fails, the GSLB device can redirect all Autodiscover traffic to the second site.

The GSLB device will accept DNS requests from the client and then return the appropriate IP address based on the rules defined. The TTL for the returned IP address is set low to ensure that changes are received by the client as quickly as possible. As with DNS round robin, because GSLB relies on DNS client resolution, its functionality is limited when the client DNS resolution is uncontrolled.

Figure 11-12

FIGURE 11-12 Using GSLB for the Autodiscover server

Load Balancing Summary

As you can see, you have a variety of solutions to choose from, depending on business requirements and budget. Table 11-4 combines affinity, load balancing, and other considerations when choosing a solution for load balancing.

TABLE 11-4 Load Balancer Comparison

Hardware Load Balancing
  Affinity options: All types
  Advantages:
  • Automatic failover
  • Can be used with Windows Failover Clusters
  • Service health checking
  Disadvantages:
  • Cost
  • Complexity

Application (Intelligent) Firewall
  Affinity options: Source IP, Cookie
  Advantages:
  • SSL bridging
  • Enhanced security
  • AD authentication
  • Service health checking
  Disadvantages:
  • Complexity

Software Load Balancing
  Affinity options: Source IP
  Advantages:
  • Inexpensive
  • Easy to configure
  Disadvantages:
  • Limited scale
  • Cannot be used with Windows Failover Clusters
  • No service health checking

DNS Round Robin
  Affinity options: None
  Advantages:
  • Easy to configure
  Disadvantages:
  • Manual failover
  • Unpredictable traffic distribution
  • Long failover time

Table 11-5 summarizes the configuration needed to support all of the Client Access Server protocols. If the load balancer is used to terminate the SSL certificates, the traffic between the load balancer and the Client Access server will be unencrypted; thus, the unencrypted port is used. Each of the services can be provided with separate load-balanced IP addresses to apply different load-balancing policies to each. For more information about configuring certificates and the internal and external URLs for your Client Access servers see Chapter 4, “Client Access in Exchange 2010.”

TABLE 11-5 Load-Balancing Client Access Services

Exchange ActiveSync
  Persistence: Source IP

Outlook Anywhere
  Persistence: Source IP

Outlook Web App
  Persistence: Cookie or Source IP

RPC Client Access
  Ports: RPC ports
  Persistence: Source IP

By default, the Outlook client makes a connection to the RPC Endpoint Mapper service on TCP port 135 on the server to negotiate a dynamic RPC port above TCP 1024. If no firewalls or load balancers sit between the clients and servers, this is usually not an issue. You can reduce the number of ports that need to be load balanced by configuring the Client Access servers to use static ports. You must make three modifications:

  1. Modify the registry to statically set the MAPI TCP/IP port on all of the Client Access servers.

    1. Open the Registry editor and then select HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeRpc\ParametersSystem.

    2. Add a DWORD named TCP/IP Port.

    3. Set the value of TCP/IP Port to the selected port number.

    4. Close the Registry editor.

  2. Modify the X:\Program Files\Microsoft\Exchange Server\V14\Bin\Microsoft.Exchange.Addressbook.Service.exe.config file to statically assign the Address Book (NSPI) and Referral Service (RFR) TCP/IP port on all of the Client Access servers.

    1. Open X:\Program Files\Microsoft\Exchange Server\V14\Bin\Microsoft.Exchange.Addressbook.Service.exe.config in Notepad or another text editor.

    2. In the <appSettings> section, locate the line that reads <add key="RpcTcpPort" value="0" /> and then change the 0 to the selected TCP/IP port.

    3. Save the file and close Notepad.

    4. Restart the Client Access server.

  3. Modify the registry to statically set the MAPI TCP/IP port on all of the Mailbox servers hosting public folders.

    1. Open the Registry editor and then select HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem.

    2. Add a DWORD named TCP/IP Port.

    3. Set the value of TCP/IP Port to the selected port number.

    4. Close the Registry editor.

    5. Restart the Mailbox server.
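As a sketch, the registry change in steps 1 and 3 could be captured in a .reg file like the following. The port number shown (59532, hex 0xE88C) is an arbitrary example; replace it with your selected port, and note that the value name must be exactly TCP/IP Port:

```
Windows Registry Editor Version 5.00

; Example only: 0x0000e88c = 59532 (an arbitrary static MAPI port)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeRpc\ParametersSystem]
"TCP/IP Port"=dword:0000e88c
```

Importing the same file on each Client Access server (and on each Mailbox server hosting public folders, per step 3) keeps the port consistent across the array.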

After the load balancer is configured, certificates need to be applied and the internal and external URLs need to be set on each of the Client Access servers.

Creating a Client Access Server Array

Using a load-balancing product allows you to load-balance connectivity across the Client Access servers for all communication types. To represent the load-balanced RPC Client Access endpoint in a single Active Directory site, you create a Client Access array. The name and IP address of the load-balanced cluster must then be added to the local Domain Name System (DNS); for example, you could add an A record for the array name that points to the load balancer's virtual IP address. After adding the DNS record, create the Client Access array and assign it to an Active Directory site using the New-ClientAccessArray cmdlet. If mailbox databases already exist in the Active Directory site, you must assign the Client Access array to each of them using the Set-MailboxDatabase cmdlet with the RpcClientAccessServer parameter. To avoid this extra step, create the Client Access array before installing any Mailbox servers in the Active Directory site.
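As a hedged sketch, the two cmdlets mentioned above might be run like this from the Exchange Management Shell; the array name, FQDN, site, and database names are placeholders for your environment:

```powershell
# Create the Client Access array in the Active Directory site that contains
# the load-balanced Client Access servers (all names here are examples only).
New-ClientAccessArray -Name "outlook.contoso.com" -Fqdn "outlook.contoso.com" -Site "Default-First-Site-Name"

# Point any mailbox databases that already exist in the site at the new array.
Set-MailboxDatabase "MBX-DB01" -RpcClientAccessServer "outlook.contoso.com"
```

Databases created after the array exists pick up the array name automatically, which is why creating the array first avoids the Set-MailboxDatabase step.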

A Client Access array exists in a single Active Directory site. Therefore, you need to create a Client Access array in each Active Directory site that will have load-balanced Client Access servers. Also, the Client Access array name cannot match the external Outlook Anywhere host name, or Outlook will attempt to connect to the Client Access array via RPC before falling back to HTTPS. Because the Client Access array name is used only for RPC access, certificates obtained to support Client Access connectivity (OWA, Outlook Anywhere, and so on) do not need to include the Client Access array name; RPC communications do not use certificates. For a full discussion of configuring certificates and the internal and external URLs for your Client Access servers, see Chapter 4, "Client Access in Exchange 2010."

Combining a Client Access server array with a DAG produces a fully redundant configuration. Figure 11-13 shows how an Outlook client maintains connectivity when a mailbox database failover occurs. The client computer stays connected to the same node in the Client Access server array, based on the configuration of the load balancer, and that Client Access server connects to the second Mailbox server to maintain connectivity to the mailbox.

The other scenario where the Client Access server handles a failure is illustrated in Figure 11-14. When a Client Access server fails, the load balancer reconnects the client computer to another Client Access server in the array. The new Client Access server then connects to the Mailbox server holding the active copy of the database, so the client computer remains connected to the user's mailbox.

Figure 11-13

FIGURE 11-13 Client connectivity to the Client Access server during a mailbox copy failover

Figure 11-14

FIGURE 11-14 Client connectivity to the Client Access Server during a Client Access server failover