Home > Sample chapters

How to Install, Configure, and Manage the Mailbox Role in Microsoft Exchange Server 2013

Objective 1.3: Deploy and manage high availability solutions for the mailbox role

When deploying a critical platform such as messaging, the assumption is that the system will be available without interruption. Maintaining such a complex system without any downtime is a real challenge, especially if the platform doesn't provide native functionality to address maintenance and unexpected downtime scenarios. Exchange 2013 not only continues to provide native high availability functionality for the mailbox role, it also improves on it. Improvements such as support for multiple databases per disk provide better utilization of disk space and disk IOPS by allowing active, passive, and lagged copies of different databases to be stored on the same disk. Enhancements such as automatic log replay ensure data integrity by allowing page patching on lagged copies.

Creating and configuring a Database Availability Group

In Exchange 2013, the DAG is an integral component of both high availability and site resilience. A DAG is a group of up to 16 mailbox servers that hosts mailbox database copies and provides automatic database-level recovery from failures.

You can host a mailbox database copy on any mailbox server that's a member of a DAG. You must ensure that at least two copies of a database are hosted on separate DAG members to provide high availability. A DAG is a boundary for mailbox database replication, so you can't create a copy of a mailbox database on a mailbox server that's a member of a different DAG.

Exchange 2013 makes deploying a DAG easy because it leverages the concept of incremental deployment. The incremental deployment process enables you to install the mailbox server role without requiring a complex cluster setup. You can create a DAG with a single Exchange server, and then add more Exchange servers as they are provisioned at a later date. While a single server can't provide high availability, this approach makes the process of building a DAG easier by staging the DAG object and configuration in the environment.

A DAG is created by using the New-DatabaseAvailabilityGroup cmdlet. When creating a DAG, you’re required to provide the DAG name and, optionally, the witness server name and the location of the witness directory where the file share witness data is stored. DAG is created as an empty Active Directory object. When the first mailbox server is added as a DAG member, the directory object is used to store server membership information and DAG configuration settings.
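
For example, the following sketches the creation of a DAG (the DAG name, witness server, and witness directory shown are hypothetical):

New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1 -WitnessDirectory C:\DAGFileShareWitnesses\DAG1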

While an administrator isn’t required to create and configure a cluster setup, Exchange 2013 DAG relies on failover clustering components and creates a failover cluster automatically when adding the first server to the DAG. The Cluster Network Object (CNO) is also created in the Active Directory. The CNO is identified with the same name as the name of the DAG you’re creating.

If you need to pre-stage a CNO for creating a new DAG, the process requires Active Directory permissions to create a computer object and assign the necessary permissions to it. First, create a computer object in Active Directory, ensuring the object name matches the name of the DAG you plan to create. Then, for additional security, the recommendation is to disable the CNO computer object. Finally, after you create the CNO, assign Full Control permissions on it to the Exchange Trusted Subsystem group. Alternatively, you can assign Full Control permissions only to the computer account of the first node to be added to the DAG.
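
The pre-staging steps can be sketched with the Active Directory module cmdlets and dsacls (the DAG name, OU path, and domain shown are hypothetical):

New-ADComputer -Name DAG1 -Path "OU=Exchange,DC=contoso,DC=com" -Enabled $false
dsacls "CN=DAG1,OU=Exchange,DC=contoso,DC=com" /G "CONTOSO\Exchange Trusted Subsystem:GA"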

After the DAG is created, it's given a network name and an IP address. The failover cluster core resource group contains an IP address resource and a network name resource. The name of the DAG is registered in DNS and is resolvable on the network.

With changes to failover clustering in Windows Server 2012 R2 and Exchange 2013 Service Pack 1, a DAG can be created without an administrative access point. When you create a DAG without an administrative access point, an IP address and a network name aren't assigned to the cluster, and the cluster core resource group doesn't contain an IP address or network name resource. A CNO isn't created in Active Directory, and the name of the cluster/DAG isn't registered in DNS. You can't use the failover cluster administration tools to connect to and manage the failover cluster; you must use the Shell to manage a DAG without an administrative access point.
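
A sketch of creating a DAG without an administrative access point (the names shown are hypothetical; this requires DAG members running Windows Server 2012 R2):

New-DatabaseAvailabilityGroup -Name DAG2 -WitnessServer FS1 -DatabaseAvailabilityGroupIpAddresses ([System.Net.IPAddress])::None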

When creating a DAG with an administrative access point, if the DAG member servers are to be located across different IP subnets within an Active Directory site, or across different Active Directory and physical sites for site resiliency, then you must provide multiple IP addresses for the DAG configuration: one IP address from each subnet the DAG members will be connected to. This allows the DAG to be managed regardless of which DAG member owns the cluster core resource group.
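
For example, a DAG with members in two subnets might be created as follows (the names and addresses shown are hypothetical):

New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1 -DatabaseAvailabilityGroupIpAddresses 192.168.1.50,10.0.1.50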

When creating a DAG, one of the design considerations is the configuration of a witness server. In a failover cluster, the decision of when to consider a node out of service relies on the number of votes. Each member of the DAG counts as one vote. A DAG with an even number of nodes creates the possibility of a split vote, where half the DAG members might vote a node to be out of service and the other half might vote for it to be in service. This can happen when network connectivity between locations is affected and the nodes are split evenly across the affected link. In such situations, a tie-breaker mechanism becomes essential, and the file share witness provides that mechanism. When a majority of votes is achieved among all failover cluster members, the cluster is considered to have reached quorum.

When a DAG is created, a file share witness is required. The witness vote might not be needed when the DAG has an odd number of members, but failover clustering and the DAG failover decision-making automatically account for the file share witness when it's necessary. If a witness server isn't specified, the task of creating a DAG searches for a Client Access server in the local Active Directory site that doesn't have the mailbox role installed. If one is found, it automatically creates a default directory and share on that server to be used as the file share witness.

If you specify a witness server and directory, the task tries to create the specified directory and share automatically. This succeeds if the server is an Exchange 2013 server, because the Exchange Trusted Subsystem group has the required permissions to create the directory and share. If the witness server isn't an Exchange 2013 server, you must add the Exchange Trusted Subsystem group to the local Administrators group on the witness server. If the witness server is an Active Directory domain controller, membership in its Administrators group equates to domain administrator permissions. To provide the required functionality while assigning the least privilege necessary, it isn't advisable to co-locate the file share witness on an Active Directory domain controller. It is, however, a supported configuration.

The witness server can’t be a server that’s also a member of the same DAG. The witness server is required to be a Windows Server running any version of Windows Server 2003 or later. Using a single witness server for multiple DAGs is possible, but the witness directory must be unique for each DAG.

The file share witness location also plays an important role in ensuring the availability of DAG members. In a single-site DAG configured to provide high availability to all users hosted on database copies protected by a given DAG, the file share witness needs to be located in the same physical location and Active Directory site.

When a DAG is designed to provide site resiliency in a configuration where all active users are located in a primary location, the file witness should be located at the primary site. This ensures that the majority is maintained at a primary site, even when the network link fails between primary and secondary sites.

If you have more than two locations available, Exchange 2013 also supports locating the file share witness in a third location. This enables DAG members located in the primary and secondary sites to maintain quorum even when they're unable to communicate with each other across the link between the primary and secondary sites. This design only works if the network links from the primary and secondary sites to the site hosting the file share witness provide robust connectivity.

When extending a DAG across multiple datacenters, you must account for outages and their effect on active users. Exchange administrators often prefer to optimize the use of the compute resources available to them, so they consider hosting active users from two separate locations on mailbox servers that are part of the same DAG. The problem with this design is that it can't provide high availability to all users. When a network outage occurs between the two locations, mailbox servers in only one of the two datacenters can achieve quorum. This means the active databases in the datacenter that lost quorum are dismounted, and its users experience a service interruption. Overall design considerations, such as a file share witness in a third datacenter, can affect the outcome as well. Without a third site for the file share witness, it's better to deploy two separate DAGs, with the majority of each DAG's mailbox servers located where the majority of its active users are located. This ensures that a lost quorum at any site affects the fewest possible users.

Once the DAG is created, the next step is to add servers to it. You can add a server to the DAG using the EAC or by running the Add-DatabaseAvailabilityGroupServer cmdlet. When you add the first server to the DAG, the CNO computer account for the DAG is automatically enabled.
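
For example, the following adds a server named Server2 to DAG1 (both names are hypothetical):

Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer Server2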

Existing databases aren’t highly available until additional copies of the databases are created manually. You can use the EAC or run the Add-MailboxDatabaseCopy cmdlet to add copies to a database. A new database copy can be created on any mailbox server within the same DAG. A mailbox server can host only one active or passive copy of a database. Circular logging must be disabled before creating the first copy of the database. You can enable circular logging after adding the first copy of the database. Circular logging doesn’t need to be disabled when adding additional copies of the database.

Identifying failure domains

Any solution is only as strong as its weakest component. To provide the best possible availability using a DAG, you must account for all the components that affect it and remove single points of failure by providing redundancy. Another factor to consider when designing such highly available solutions is cost. As you add more components and more redundancy to account for the failure of each component, the cost of the solution increases. Good designs strike an optimal balance between availability and cost based on the available budget. A discussion of the cost benefit of a design is beyond the scope of this book. This section covers each failure domain and its impact on the high availability of Exchange 2013.

When looking at possible failures, you need to consider the software, hardware, and environmental factors that can affect the operation of a component and the service itself. For example, the failure of a disk might affect only a single database while Exchange 2013 continues to function, whereas the failure of a Top of Rack (ToR) router might affect more than one server in a given rack.

Starting with server-level failures, hardware components such as the power supply, network adapters, and disks are the most common points of failure. Most modern server-class hardware supports and commonly ships with redundant power supplies and network cards, but they still need to be configured properly to provide resiliency against failures. If a power supply fails on a server equipped with two power supplies, the server continues to function. But what if both power supplies were connected to the same power feed? What if the feed itself fails?

The same concept applies to the network and disks. If your network adapters are teamed, you need to make sure they’re connected to separate switches and that the switches are configured properly. For storage, a common practice is to protect against disk failures by using different RAID levels.

As you might have noticed, to protect against failures, adding more components to provide redundancy also raises the cost of the solution.

Exchange 2013 is designed to provide the best efficiencies while reducing cost where possible. When a DAG is configured and databases have multiple copies configured appropriately, a single power supply or network card failure affects the given server, but not the service to the user. This provides the administrator with the flexibility to fix the failed component while end users receive uninterrupted service.

With the use of spare disks and auto-reseed functionality, Exchange 2013 automates the process that the administrator would have to manually perform otherwise. When combined with at least three copies of a database, Exchange 2013 supports JBOD configuration, eliminating the need for RAID and extra disks needed for the RAID configuration, reducing the cost of the required hardware and providing automation with the automatic reseed functionality.

For a single datacenter deployment, it’s best to ensure that servers from the same DAG are located in different racks, have connectivity to different network switches, and are connected to a separate power feed. This ensures that any power, network or other rack-level failures, or human errors won’t affect the entire DAG and surviving components are sufficient to provide uninterrupted service to end users.

Another benefit of such a design is that, with proper capacity planning, Exchange administrators can remove a mailbox server from service to perform required maintenance without interrupting service.

You also need to account for other external dependencies, such as the file share witness. While it's possible to configure an alternate witness server and alternate witness directory, if the witness server fails, the DAG doesn't automatically use the alternates. The alternate witness server and directory, if preconfigured, can only help speed up site failovers, and even then not without manual intervention.

While a Distributed File System (DFS) share might seem like a good way to provide redundancy, it's possible that replication of the latest data hasn't completed to the file share witness copy that an Exchange server connects to. For this reason, using DFS to host the file share witness directory isn't supported.

Managing DAG networks

In Exchange 2010, when you created a DAG, the initial DAG networks were created automatically based on the subnets enumerated by the cluster service. In environments where the interfaces for a given network (MAPI or Replication) were on multiple subnets, multiple redundant DAG networks were created, and the administrator had to manually collapse the networks into their appropriate function, such as the MAPI or Replication network. This procedure was often overlooked, resulting in unexpected DAG behavior, such as the incorrect network being used for replication.

In Exchange 2013, network management is automated. For automatic DAG network configuration to work correctly, MAPI networks must be registered in DNS and, in a routed network with multiple subnets, must also have a default gateway assigned. Replication networks must not be registered in DNS and they must not have a default gateway assigned to the interface. If these prerequisites are met, DAG network configuration is performed automatically and collapsing DAG networks is no longer necessary.

DAG network management is also automatic in Exchange 2013 by default. DAG networks aren't visible in the EAC and can only be viewed using the Shell. To view, edit, and otherwise manually manage DAG networks, you must configure the DAG for manual network configuration. You can do so by selecting the Configure Database Availability Group Networks Manually check box on the DAG object in the EAC, or by running the Set-DatabaseAvailabilityGroup cmdlet with the ManualDagNetworkConfiguration parameter set to $true.
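
For example (assuming a hypothetical DAG named DAG1):

Set-DatabaseAvailabilityGroup -Identity DAG1 -ManualDagNetworkConfiguration $true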

By default, a DAG can use the MAPI network not only to service client traffic, but also for replication traffic, if needed. If you create a dedicated replication network and want to force all replication to take place on it when the link is operational, you can run the Set-DatabaseAvailabilityGroupNetwork cmdlet and set the ReplicationEnabled parameter to $false on the MAPI network. This forces replication traffic over the dedicated replication network, but the setting is ignored if the DAG can't replicate data over any replication network and the only available replication path is the MAPI network.
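
For example, assuming the MAPI network kept its default name of MapiDagNetwork (the DAG name is hypothetical):

Set-DatabaseAvailabilityGroupNetwork -Identity DAG1\MapiDagNetwork -ReplicationEnabled:$false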

When using dedicated iSCSI networks for data storage, it's important to segregate the iSCSI traffic from MAPI and replication traffic. After manual network configuration is enabled on the DAG, you can set the IgnoreNetwork parameter to $true on the iSCSI networks so the DAG doesn't use them.
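
For example, assuming the iSCSI network was enumerated under the hypothetical name DAGNetwork02:

Set-DatabaseAvailabilityGroupNetwork -Identity DAG1\DAGNetwork02 -IgnoreNetwork $true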

Managing mailbox database copies

After creating a DAG, the first step in providing high availability for a database is to add a passive copy of an existing database. Any given database can have a maximum of 15 passive copies because of the 16-server limit per DAG.

After you add a database copy, the DAG starts the initial copy of the database from the active copy. This process is known as seeding. While this automated process works for most environments, you might not want seeding to start automatically. When creating a database copy, you can set the SeedingPostponed parameter to $true. This suspends the seeding operation when the database copy is created, and you must explicitly seed the database copy later.
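
For example, the following creates a copy of DB1 on Server3 without seeding it, and then seeds it explicitly afterward (the names are hypothetical):

Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer Server3 -SeedingPostponed
Update-MailboxDatabaseCopy -Identity DB1\Server3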

A database in a DAG might have more than one copy. When a passive copy of the database needs to be activated, the best copy selection (BCS) process takes many factors into account. One of those factors is the activation preference, which can be set by the administrator. You can set the ActivationPreference parameter on a database copy to a value of 1 or higher; the value can't be more than the number of copies of the given database.
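
For example, the following sets the copy of DB1 on Server2 as the second preference for activation (the names are hypothetical):

Set-MailboxDatabaseCopy -Identity DB1\Server2 -ActivationPreference 2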

While the activation preference helps define in which order a database copy is to be selected for activation, it’s by no means a binding order for the BCS process of Exchange 2013. BCS takes into account multiple factors, such as the replay queue length and protocol health to ensure that not only is the selected database copy healthy, but that the server where the database copy is hosted is also healthy to ensure that a failover operation doesn’t cause service interruption.

When creating a database copy, you can also opt to create a lagged copy of the database. A lag is defined by the ReplayLagTime and TruncationLagTime parameters. The maximum allowed lag time is 14 days. A lagged copy can be used as a backup. An administrator can roll back to a point in time within the configured lag window by manipulating log files that haven’t been played back into the lagged database copy.
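
For example, the following creates a copy of DB1 on Server3 with a seven-day replay lag (the names and lag value are hypothetical; lag times use the dd.hh:mm:ss format):

Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer Server3 -ReplayLagTime 7.00:00:00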

Exchange 2013 also introduces logic to improve service availability. If the database has a bad page, Exchange 2013 plays logs forward into the lagged database copy rather than risk losing data. The play-forward logic also applies to lagged copies that don't have enough disk space to keep all the logs, or that are the only database copy available for activation.

When a database without an additional copy is using circular logging, it uses JET circular logging. When a database with multiple copies on the DAG uses circular logging, it uses the continuous replication circular logging (CRCL) instead of JET circular logging. To account for this difference in the circular logging method, circular logging must be disabled on the database before creating another copy of the database. After creating a second copy of the database, circular logging can be enabled, which then uses CRCL. When you create subsequent copies of the database, you don’t need to disable circular logging because it’s already configured to use CRCL, not JET circular logging.

After a database copy is created, it can be in one of many states over its lifecycle. The Get-MailboxDatabaseCopyStatus cmdlet enables you to view whether the database copy is healthy and what state it's in. For example, a newly created database copy can be in the Initializing state. While in this state, the system is verifying that the database and log stream are in a consistent state. A database can also be in this state during the transition from one state to another.
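
For example, the following shows the status and queue lengths for all copies of a hypothetical database named DB1:

Get-MailboxDatabaseCopyStatus -Identity DB1 | Format-List Name,Status,CopyQueueLength,ReplayQueueLength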

When a database copy fails to activate, Exchange 2013 provides the ability to test the replication health for the given database by running the Test-ReplicationHealth cmdlet. This cmdlet performs tests against many components that can affect database replication health, such as the Cluster service, the Replication service, and Active Manager. The Test-ReplicationHealth cmdlet is an important tool in an Exchange administrator's troubleshooting toolbox.
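
For example, the following tests replication health on a DAG member named Server2 (a hypothetical name):

Test-ReplicationHealth -Identity Server2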

Let’s take a look at some important mailbox database copy maintenance tasks.

Activate a mailbox database copy

When you’re performing maintenance on a server hosting an active copy of a database, you can initiate a switchover of the active database to another server by activating a passive copy. The database switchover process dismounts the current active database and mounts the database copy on the specified server. The operation succeeds if the database copy on the selected server is healthy and current. The following example moves the active database named DB1 to server Server2.

Activating database DB1 on Server2

Move-ActiveMailboxDatabase DB1 -ActivateOnServer Server2

Besides simply activating a database copy on a different server, the Move-ActiveMailboxDatabase cmdlet also enables you to override the AutoDatabaseMountDial setting, as well as skip one or more of the many health check requirements. These parameters allow the administrator to provide service at a degraded level instead of taking the database offline when the health checks or the database mount dial requirements aren't met.

Activate a lagged mailbox database copy

Activating a lagged copy is typically done in one of two situations: when the lagged copy is the only copy available, or when a point-in-time recovery is needed because the other database copies are affected by database corruption.

If you want to activate a lagged copy simply to make it current, such as when all other database copies are unavailable, the operation is a simple process of replaying all the log files into the database copy. One important factor to consider is the time it takes to replay all of the log files before the copy can be activated. The number of log files depends on the configured lag for the copy: if the database copy is configured for a 14-day lag, it can have up to 14 days of logs that must be replayed before the copy can be activated. The log replay rate depends on the storage architecture and the server resource configuration.

The process of activating a lagged copy and replaying all of the logs requires you to carry out the following steps.

  • Suspend replication for the lagged copy being activated:

    Suspend-MailboxDatabaseCopy DB1\Server3 -SuspendComment "Activate lagged copy of
    DB1 on Server3" -Confirm:$false
  • Make a copy of the database copy and log files. While this is an optional step, it’s a highly recommended best practice because it protects you from losing the only lagged copy that might be needed if the next step of replaying log files fails.
  • Activate lagged copy and replay all log files:

    Move-ActiveMailboxDatabase DB1 -ActivateOnServer Server3 -SkipLagChecks

If a lagged copy needs to be activated for recovery to a certain point in time, the process is a bit different from activating a copy to make it current. For example, suppose a database copy is configured for a 14-day lag, and the active copy of the database is found to be corrupt, with the corruption determined to have caused data loss 10 days ago. In this case, only the first four days of the lagged copy's logs need to be replayed.

The process of activating lagged copy to a specific point in time requires you to carry out the following steps.

  1. Suspend replication for the lagged copy being activated:

    Suspend-MailboxDatabaseCopy DB1\Server3 -SuspendComment "Activate lagged copy of
    DB1 on Server3" -Confirm:$false
  2. Make a copy of the database copy and log files. While this is an optional step, it’s a highly recommended best practice because it protects you from losing the only lagged copy that might be needed if the next step of replaying log files fails.
  3. Remove all log files past the determined point of recovery (all log files past the first four days in the previous example).
  4. Delete the checkpoint file for the database.
  5. Perform the recovery using eseutil. This process replays the log files into the database and can take a considerable amount of time. Many factors affect how long the process can take, such as the number of log files to be replayed, the hardware configuration of the server performing the replay operation, and the disk configuration, which can determine the speed at which the logs can be replayed. In the following example, eXX should be replaced with the actual log generation prefix, such as E00.

    eseutil /r eXX /a
  6. After the log replay finishes successfully, the database is in a clean shutdown state, and it can be copied and used for a recovery purpose. The best practice for the recovery of lost data is to mount the database files in a recovery database created for the recovery purpose. You can create a recovery database using the New-MailboxDatabase cmdlet with the Recovery parameter.
  7. You can extract recovered data by mounting the recovery database and using the mailbox restore request to export the data to a PST or merge into an existing mailbox.
  8. After the recovery process is complete, resume replication for the database:

    Resume-MailboxDatabaseCopy DB1\Server3

Instances might occur when you need to activate a lagged database copy and request redelivery of missing emails from SafetyNet. SafetyNet in Exchange 2013 is similar to, but an improvement on, the transport dumpster feature in Exchange 2010. By default, SafetyNet stores copies of successfully processed messages for two days.
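
If lagged copies are in use, the SafetyNet hold time can be raised to match the replay lag, for example to seven days (the value shown is hypothetical):

Set-TransportConfig -SafetyNetHoldTime 7.00:00:00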

While most of the steps to activate a lagged copy and request redelivery of messages from SafetyNet are similar to the point-in-time recovery process described previously, the following steps highlight the differences.

  • After suspending the lagged copy, you need to determine the logs required to bring the database to a clean shutdown state. You can determine these required logs by running eseutil and by inspecting the database header:

    eseutil /mh <DBPath> | findstr /c:"Log Required"
  • Note the HighGeneration number from the output. The HighGeneration number indicates the last log file required. Move all log files with a generation number higher than HighGeneration to a different location, so they're not played back into the database when it's activated.
  • On the server hosting the active copy of the database, stop the Microsoft Exchange Replication service, so the logs don’t continue replicating to the lagged copy.
  • Perform the database switchover and activate the lagged copy:

    Move-ActiveMailboxDatabase DB1 -ActivateOnServer Server3 -MountDialOverride
    BestEffort -SkipActiveCopyChecks -SkipClientExperienceChecks -SkipHealthChecks
    -SkipLagChecks
  • After activation, the database automatically requests redelivery of the missing messages from SafetyNet.

Move the mailbox database path

After a database is created, if you need to move the database files from their current location to a new location, the actual steps might seem as simple as editing the database path using EAC or the Move-DatabasePath cmdlet.

But this is both a disruptive and a time-consuming operation. It's disruptive because when the database path is changed, the database is automatically dismounted for the move operation and mounted again after the files are copied to the new location. The database is unavailable for the duration of the operation, and users can't connect to their mailboxes. You also can't change the database path on a replicated database. To move the database path of a replicated database, you must first remove all copies of the database, essentially bringing the database down to a single copy, then perform the move operation, and then add the copies back.
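
After the database is down to a single copy, the move itself might look like the following (the database name and paths are hypothetical):

Move-DatabasePath -Identity DB2 -EdbFilePath D:\Databases\DB2\DB2.edb -LogFolderPath D:\Databases\DB2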

For the database copies, you need to create the same folder structure as the primary database's new location and manually copy the database files from the old location to the new location before adding the database copies again. This saves time and effort because seeding isn't required again; only log files generated after the database copies were removed need to be replicated.

Update a mailbox database copy

Updating a mailbox database copy is essentially a seeding operation. The seeding process creates a baseline copy of the active database, which then continues replication of additional logs as they’re generated. When creating a database copy, you can suspend the initial seeding operation. This creates a database copy, but it doesn’t actually copy any data to create the baseline copy. A database copy that had its seeding suspended is eventually required to be updated before it can be considered a healthy copy that can be used as a switchover or failover target.

Updating a database copy is also required when a database copy becomes diverged or unrecoverable. Failure of the disk hosting a database copy is one example where the database is unrecoverable and must be updated to re-create the database baseline and resume replication. Other events that require seeding a database copy include a corrupt log file that can't be replayed into the passive database copy, and offline defragmentation of any copy of the database, which resets the database's log generation sequence back to 1.

Before you can update a database copy, you must suspend it. When starting an update, you can select which database copy should be used as the source. This is helpful in situations where a passive database copy is local to the copy being updated and might be a better option than seeding from the active copy, which could be hosted in a different datacenter. While a passive copy can be used for seeding, after an update is complete, log replication always uses the active copy as its source. It's also possible to select which network should be used for seeding.

Here’s an example where a database copy for database DB3 needs to be updated. If you use Server3, which has a healthy copy of the database as a source, the command to update the database copy is as follows.

Update-MailboxDatabaseCopy -Identity DB3\Server2 -SourceServer Server2

While using the EAC or the previously mentioned cmdlet enables you to update a database copy online, you can also opt for an offline copy process to update a mailbox database copy. When you manually copy the offline database, you first need to dismount the active copy of the database. This results in a service interruption for users hosted on the database. The process for the manual copy method is as follows:

  • After you dismount the active copy of the database, copy the database and log files to a shared network location or an external drive.
  • Mount the database to restore service to users.
  • On the server that will host database copy, copy the database files from the shared network location or external drive to the same database path as the active database copy.
  • Add the database copy with the SeedingPostponed parameter.

Objective summary

  • A DAG provides high availability when multiple database copies are created for the database a mailbox is hosted on. Creating a DAG and adding mailbox servers to it doesn't automatically provide redundancy for the databases located on those servers.
  • Failovers or switchovers can cause resource overconsumption if the number of failures in a DAG isn't accounted for in the design. For example, if a mailbox server is designed for 5,000 active users and two servers in the DAG fail, causing two databases from those servers, each containing 3,000 users, to activate on it, the server exceeds its designed resource limits. This can have adverse effects on service levels, from degraded performance and slow response times to a complete outage.
  • When a DAG is designed to stretch across multiple sites, the file share witness location is critical to ensuring high availability. The best practice is to locate the majority of the mailbox servers at the primary site. If users are distributed evenly across multiple sites, you should deploy separate DAGs for each major site.

Objective review

Answer the following questions to test your knowledge of the information in this objective. You can find the answers to these questions and explanations of why each answer choice is correct or incorrect in the “Answers” section at the end of this chapter.

  1. You are an Exchange administrator for Contoso, Ltd. You need to deploy a DAG without an administrative access point. What should you do?

    1. Deploy mailbox servers on Windows Server 2008 R2.
    2. Deploy mailbox servers on Windows Server 2012 R2.
    3. Deploy mailbox servers on Windows Server 2012.
  2. You are an Exchange administrator for Contoso, Ltd. You need to move databases from the system drive to a new data drive. Select the required steps to complete the task successfully.

    1. Enable circular logging on the database.
    2. Disable circular logging on the database.
    3. Run Dismount-Database cmdlet.
    4. Run Move-DatabasePath cmdlet.
  3. You are an Exchange administrator for Contoso, Ltd. You need to configure DAG networks to exclude the iSCSI network from replication. Which steps must you take? Select all that apply.

    1. Run the Set-DatabaseAvailabilityGroup cmdlet and set ManualDagNetworkConfiguration to $false.
    2. Run the Set-DatabaseAvailabilityGroup cmdlet and set ManualDagNetworkConfiguration to $true.
    3. Run the Set-DatabaseAvailabilityGroupNetwork cmdlet and set IgnoreNetwork to $true.
    4. Run the Set-DatabaseAvailabilityGroupNetwork cmdlet and set IgnoreNetwork to $false.