Design and Implement an Azure Storage Strategy

  • 3/11/2015

Objective 4.6: Implement SQL databases

In this section, you learn about Microsoft Azure SQL Database, a PaaS offering for relational data.

Choosing the appropriate database tier and performance level

Choosing a SQL Database tier used to be simply a matter of storage space. Recently, Microsoft added new tiers that also affect the performance of SQL Database. This tiered pricing is called Service Tiers. There are three service tiers to choose from, and while they still each have restrictions on storage space, they also have some differences that might affect your choice. The major difference is in a measurement called database throughput units (DTUs). A DTU is a blended measure of CPU, memory, disk reads, and disk writes. Because SQL Database is a shared resource with other Azure customers, sometimes performance is not stable or predictable. As you go up in performance tiers, you also get better predictability in performance.

  • Basic Basic tier is meant for light workloads. There is only one performance level of the basic service tier. This level is good for small use, new projects, testing, development, or learning.
  • Standard Standard tier is used for most production online transaction processing (OLTP) databases. The performance is more predictable than the basic tier. In addition, there are four performance levels under this tier, levels S0 to S3.
  • Premium Premium tier is designed for the highest transaction volumes and scales beyond the standard tier. A useful way to compare the tiers is by the time unit in which throughput is quoted: the basic tier can handle 16,600 transactions per hour, the standard/S2 level can handle 2,570 transactions per minute, and the top premium level can handle 735 transactions per second. That last figure translates to 2,646,000 transactions per hour in basic tier terminology.
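To compare the tiers on a common scale, you can normalize the quoted rates to transactions per hour. The sketch below is purely illustrative; the rates are the figures quoted above (not guarantees), and the function and dictionary names are hypothetical:

```python
# Normalize the quoted throughput figures to transactions per hour
# so the three tiers can be compared directly.
QUOTED_RATES = {
    "basic":       (16_600, "hour"),
    "standard_s2": (2_570,  "minute"),
    "premium_top": (735,    "second"),
}

PER_HOUR = {"hour": 1, "minute": 60, "second": 3_600}

def transactions_per_hour(rate, unit):
    """Convert a rate quoted per `unit` to transactions per hour."""
    return rate * PER_HOUR[unit]

for tier, (rate, unit) in QUOTED_RATES.items():
    print(f"{tier}: {transactions_per_hour(rate, unit):,} transactions/hour")
```

Run this way, the top premium level works out to 2,646,000 transactions per hour, roughly 160 times the basic tier's quoted rate.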

There are many similarities between the various tiers. Each tier has a 99.9 percent uptime SLA, backup and restore capabilities, access to the same tooling, and the same database engine features. Fortunately, the levels are adjustable, and you can change your tier as your scaling requirements change.

The management portal can help you select the appropriate level. You can review the metrics on the Monitor tab to see the current load of your database and decide whether to scale up or down.

  1. Click the SQL database you want to monitor.
  2. Click the Monitor tab, as shown in Figure 4-10.


    FIGURE 4-10 The Monitor tab

  3. Add the following metrics:

    • CPU Percentage
    • Physical Data Reads Percentage
    • Log Writes Percentage

All three of these metrics are shown relative to the DTU of your database. If you reach 80 percent of your performance metrics, it’s time to consider increasing your service tier or performance level. If you’re consistently below 10 percent of the DTU, you might consider decreasing your service tier or performance level. Be aware of momentary spikes in usage when making your choice.
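The 80 percent / 10 percent guidance can be expressed as a simple decision rule. The sketch below is illustrative: the thresholds and the idea of smoothing out momentary spikes come from the text, while the function and its parameter names are hypothetical.

```python
def scaling_recommendation(dtu_percent_samples, high=80.0, low=10.0):
    """Recommend a tier change from a window of DTU-relative metric
    samples (CPU, data reads, or log writes, each as a percentage).

    Averaging over the window keeps a momentary spike from
    triggering an unnecessary tier change.
    """
    if not dtu_percent_samples:
        return "no data"
    avg = sum(dtu_percent_samples) / len(dtu_percent_samples)
    if avg >= high:
        return "scale up"    # consider a higher tier/performance level
    if avg <= low:
        return "scale down"  # consider a lower tier/performance level
    return "stay"

# A brief spike to 95% does not trigger scale-up if the average stays low.
print(scaling_recommendation([12, 9, 95, 11, 8]))  # stay
```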

In addition, you can configure an email alert that fires when a metric reaches 80 percent of your selected DTU by completing the following steps:

  1. Click the metric.
  2. Click Add Rule.
  3. The first page of the Create Alert Rule dialog box is shown in Figure 4-11. Add a name and description, and then click the right arrow.


    FIGURE 4-11 The first page of the Create Alert Rule dialog box

  4. On the next page of the Create Alert Rule dialog box, shown in Figure 4-12, select the condition and the threshold value.


    FIGURE 4-12 The second page of the Create Alert Rule dialog box

  5. Select your alert evaluation window. An email is generated only if the condition persists for the entire duration you specify; the window should be at least 10 minutes.
  6. Select the action. You can choose to send an email either to the service administrator(s) or to a specific email address.

Configuring and performing point in time recovery

Azure SQL Database does a full backup every week, a differential backup each day, and an incremental log backup every five minutes. The incremental log backup allows for a point in time restore, which means the database can be restored to any specific point in time within the retention period. If you accidentally delete a customer’s table from your database, you can recover it with minimal data loss, provided you know roughly when the deletion happened so you can restore to a point just before it.
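Because log backups are taken every five minutes, the most recent restorable point lags the current time by up to five minutes. A small sketch of that calculation (illustrative only; the five-minute interval is from the text, the function name is hypothetical):

```python
from datetime import datetime, timedelta

LOG_BACKUP_INTERVAL = timedelta(minutes=5)

def latest_restore_point(now):
    """Round `now` down to the most recent 5-minute log-backup boundary."""
    midnight = datetime(now.year, now.month, now.day)
    elapsed = now - midnight
    complete = (elapsed // LOG_BACKUP_INTERVAL) * LOG_BACKUP_INTERVAL
    return midnight + complete

print(latest_restore_point(datetime(2015, 3, 11, 14, 37, 42)))
# 2015-03-11 14:35:00
```

The same arithmetic shows the worst-case data loss from a point in time restore: at most one log-backup interval.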

The length of time a restore takes varies. The further the restore point is from the last differential backup, the longer the operation takes, because more log backups must be replayed. A restore creates a new database; it keeps the same service tier as the original, but its performance level is set to the minimum level of that tier.

Depending on your service tier, you will have different backup retention periods. Basic retains backups for 7 days, standard for 14 days, and premium for 35 days. In most cases, 14 days is enough time to determine that you have a problem and how to correct it.
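Whether a given restore point is still reachable therefore depends on both the tier and the age of the point. A minimal lookup sketch (the retention values are from the text; the function name is hypothetical):

```python
# Backup retention period in days, by service tier (from the text).
RETENTION_DAYS = {"basic": 7, "standard": 14, "premium": 35}

def restore_point_available(tier, age_in_days):
    """Return True if a restore point `age_in_days` old is still
    within the tier's backup retention period."""
    return age_in_days <= RETENTION_DAYS[tier]

print(restore_point_available("standard", 10))  # True
print(restore_point_available("basic", 10))     # False
```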

You can restore a database that was deleted as long as you are within the retention period. Follow these steps to restore a database:

  1. Select the database you want to restore, and then click Restore, as shown in Figure 4-13.


    FIGURE 4-13 The Restore button

  2. The Restore dialog box opens, as shown in Figure 4-14.


    FIGURE 4-14 The Restore dialog box

  3. Select a database name.
  4. Select a restore point. You can use the slider bar or manually enter a date and time.
  5. You can also restore a deleted database. Select the Deleted Databases tab, as shown in Figure 4-15.


    FIGURE 4-15 The Deleted Databases tab for SQL databases in the management portal

  6. Select the database you want to restore.
  7. Click Restore as you did in step 1.
  8. Specify a database name for the new database.
  9. Click Submit.

Enabling geo-replication

Every Azure SQL Database subscription has built-in redundancy. Three copies of your data are stored across fault domains in the datacenter to protect against server and hardware failure. This redundancy is included in the subscription price and is not configurable. You can configure two more fault-tolerant options: standard geo-replication and active geo-replication.

Standard geo-replication lets you fail the database over to a different region when the primary is unavailable. It is available in the standard and premium service tiers. The main difference from active geo-replication is that with standard geo-replication clients cannot connect to the secondary server; it remains offline until it is needed to take over for the primary. The target region for the offline secondary is predetermined. For instance, if your primary server is in North Central US, your secondary server will be in South Central US. The source and target servers must belong to the same subscription.

Creating an offline secondary database (existing portal)

Follow these steps to configure an offline secondary database:

  1. Click the Geo-Replication tab for the database, as shown in Figure 4-16, and click Add Secondary.


    FIGURE 4-16 Replication properties

  2. On the Specify Secondary Settings page, shown in Figure 4-17, select a server from the server list or click New SQL Database Server, and then click the right arrow.


    FIGURE 4-17 Creating a new secondary for geo replication

  3. If you select a new server, the SQL Database Server Settings page opens (see Figure 4-18). Enter a login name and password, and select the Allow Windows Azure Services To Access The Server check box.


    FIGURE 4-18 The SQL Database Server Settings page

  4. Monitor the Geo-Replication page for the progress of building the new secondary. You can watch the Replication Status of the database switch from Pending to Active.

    If there is a datacenter failure, the same page shows the Replication Status of your database as Unavailable. You will also see the Failover Enabled property set to true for the database and be able to initiate a failover by clicking Failover on the command bar.

Creating an offline secondary database (Preview portal)

To create an offline secondary database in the Preview portal, follow these steps:

  1. Navigate to your SQL database in the Preview portal.
  2. Scroll to the Geo Replication section, and click the Configure Geo Replication box.
  3. On the Geo Replication blade, select your target region.
  4. On the Create Secondary blade, click Create.

Creating an online secondary database (existing portal)

There are some differences between standard geo-replication and active geo-replication. Active geo-replication differs in these ways:

  • You can have up to four secondary copies of your database.
  • It is available only at the premium service tier.
  • The online secondary is eventually consistent with the primary.
  • Of the four secondary copies, all four can be online, or three can be online and one can be an offline secondary.
  • An online secondary is readable. This allows you to put reports or ETL processes on the secondary, reducing locking overhead on the primary. Because the secondary copies can be located in different regions, you can also place readable databases close to remote users.

Before you create an online secondary, the following requirements must be met:

  • The secondary database must have the same name as the primary.
  • They must be on separate servers.
  • They both must be on the same subscription.
  • The secondary server cannot be a lower performance tier than the primary.
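The requirements above can be checked mechanically before you attempt to create an online secondary. This sketch is illustrative only: the dictionary fields, function name, and the tier ordering (basic < standard < premium, per the service-tier discussion) are assumptions, not an Azure API.

```python
# Ordering used to compare performance tiers (assumption:
# basic < standard < premium, per the service-tier discussion).
TIER_RANK = {"basic": 0, "standard": 1, "premium": 2}

def validate_online_secondary(primary, secondary):
    """Return a list of violated requirements (empty list = OK).
    `primary` and `secondary` are dicts with keys:
    name, server, subscription, tier."""
    problems = []
    if secondary["name"] != primary["name"]:
        problems.append("secondary must have the same name as the primary")
    if secondary["server"] == primary["server"]:
        problems.append("primary and secondary must be on separate servers")
    if secondary["subscription"] != primary["subscription"]:
        problems.append("both must be on the same subscription")
    if TIER_RANK[secondary["tier"]] < TIER_RANK[primary["tier"]]:
        problems.append("secondary cannot be a lower tier than the primary")
    return problems

primary = {"name": "Flashcards", "server": "SrvA",
           "subscription": "sub1", "tier": "premium"}
secondary = {"name": "Flashcards", "server": "SrvB",
             "subscription": "sub1", "tier": "premium"}
print(validate_online_secondary(primary, secondary))  # []
```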

The steps for configuring an active secondary are the same as for creating an offline secondary, except that you can select the target region, as shown in Figure 4-19.


FIGURE 4-19 The New Secondary For Geo Replication dialog box for creating an active secondary

Creating an online secondary database (Preview portal)

If your SQL database is a Premium database, you will be able to create an online secondary. To create an online secondary in the Preview portal, follow these steps:

  1. Navigate to your SQL database in the Preview portal.
  2. On the Create Secondary blade, change the Secondary Type to Readable.
  3. Click Create to create the secondary.

Creating an online or offline secondary with Windows PowerShell

Creating an online or offline secondary can be done with Windows PowerShell using the Start-AzureSqlDatabaseCopy cmdlet.

To create an online secondary, use the following command:

Start-AzureSqlDatabaseCopy -ServerName "SecondarySrv" -DatabaseName "Flashcards" `
-PartnerServer "NewServer" -ContinuousCopy

To create an offline secondary, use the following command:

Start-AzureSqlDatabaseCopy -ServerName "SecondarySrv" -DatabaseName "Flashcards" `
-PartnerServer "NewServer" -ContinuousCopy -OfflineSecondary

Importing and exporting data and schema (existing portal)

Importing and exporting data and schema from a SQL database is essential for a number of situations, including creating a local development or test copy, moving a database to a local instance of SQL Server, or archiving data before you clean up a production database.

To export a database, follow these steps:

  1. In the management portal, click the database you want to export.
  2. On the task bar, click Export.
  3. Enter values for the following:

    • FileName
    • Subscription
    • Blob Storage Account
    • Container
    • Login Name
    • Password

    This creates a BACPAC file that can be used to create a database in an on-premises SQL Server instance, in SQL Server running in an Azure VM, or in Azure SQL Database.

    To import the BACPAC into Azure SQL Database, perform the following steps:

  4. Click New, Data Services, SQL Database, Import.
  5. Click the folder under the BACPAC URL to navigate to the BACPAC file stored in the storage account.
  6. Click Open.
  7. Enter the following information:

    • Subscription
    • Service Tier
    • Performance Level
    • Server
  8. Click the Next arrow.
  9. Enter the login details for the new server.
  10. Click the check mark. Your new database appears online shortly.

The import process is faster if you use the standard service tier and at least the S2 performance level.

Importing and exporting data and schema (Preview portal)

The Preview portal does not currently support importing and exporting data and schema.

Objective summary

  • The different editions of Azure SQL Database affect performance, SLAs, backup/restore policies, pricing, geo-replication options, and database size.
  • The edition of Azure SQL Database determines the retention period for point in time restores. This should factor into your backup and restore policies.
  • It is possible to create an online secondary when you configure Azure SQL Database geo-replication. It requires the Premium Edition.
  • If you are migrating an existing database to the cloud, you can use the Azure management portal to move schema and data into your Azure SQL database.

Objective review

Answer the following questions to test your knowledge of the information in this objective. You can find the answers to these questions and explanations of why each answer choice is correct or incorrect in the “Answers” section at the end of this chapter.

  1. Which of the following is not a requirement for creating an online secondary for SQL Database? (Choose all that apply.)

    1. The secondary database must have the same name as the primary.
    2. They must be on separate servers.
    3. They must be on different subscriptions.
    4. The secondary server cannot be a lower performance tier than the primary.
  2. Which metrics should you add to monitoring that will help you select the appropriate level of SQL Database?

    1. CPU Processor Count
    2. CPU Percentage
    3. Physical Data Reads Percentage
    4. Log Writes Percentage
  3. From what you know about SQL Database architecture, what should you include in your client application code?

    1. Connection resiliency, because you could failover to a replica.
    2. Transaction resiliency so you can resubmit a transaction in the event of a failover.
    3. Query auditing so you can baseline your current query times and know when to scale up the instance.
    4. A backup and restore operation for the database.