Home > Sample chapters

Provisioning databases

Adding databases to a SQL Server instance

Now that your SQL Server instance is installed and its features are configured, it is time to put it to use. In SQL Server Management Studio, you’ll see four system databases there already, plus, if you have installed these features, additional databases for SQL Server Reporting Services and SQL Server Integration Services. Now it is time to create user databases.

We have discussed a number of database configurations in Chapter 3, including the physical configuration of files and storage.

For information on Azure SQL Databases, refer to Chapter 5. The remainder of this chapter refers to SQL Server instance databases.

Considerations for migrating existing databases

As an administrator, you’ll be faced with the need to move a database from one instance to another, perhaps for the purposes of refreshing a preproduction environment, moving to a new SQL Server instance, or promoting a database into production for the first time.

When copying a database into a new environment, keep in mind the following:

  • Edition

  • Version and compatibility mode

  • SQL logins

  • Database-scoped configurations

  • Database settings

  • Encryption

Let’s look at each of these in more detail.

Edition

Generally speaking, databases progress upward in terms of cost and feature set, beginning with Express edition, Web, Standard, and finally Enterprise edition. (Developer edition is the same as Enterprise edition, except for the ability to use it in a production environment.) Moving a database up from Express, Web, or Standard edition expands the features available for use in the database.

The concern for DBAs is when databases need to move down from Enterprise, Standard, or Web edition. A large number of features that had historically been exclusive to Enterprise edition were included in Standard edition for the first time with SQL Server 2016 SP1, expanding what we can do with Standard edition as developers and administrators.

You will encounter errors related to higher-edition features when restoring or attaching a database to an instance whose edition does not support those features. For example, when attempting to restore a database that uses data compression on its tables to an instance that does not support data compression, you will receive an error message similar to “cannot be started in this edition of SQL Server because part or all of object ‘somecompressedindex’ is enabled with data compression.” In this case, you must manually remove data compression from the database on the source instance and then create a new backup or detach the database again before migrating to the lower-edition instance. You cannot turn off the use of higher-edition features from the lower-edition instance.

You can foresee this problem by querying the dynamic management view sys.dm_db_persisted_sku_features, which lists all edition-specific features in use. Keep in mind that some features are supported in all editions but with limitations. For example, memory-optimized databases are supported even in Express edition, but with only a small amount of memory available for allocation.

For example,

USE WideWorldImporters;
SELECT feature_name
FROM sys.dm_db_persisted_sku_features;

returns the following rows:

feature_name
Compression
Partitioning
ColumnStoreIndex
InMemoryOLTP

Version and compatibility mode

SQL Server databases upgraded from an older version to a new version will retain a prior compatibility mode. Compatibility mode is a database-level setting.

For example, restoring or attaching a database from SQL 2012 to SQL 2017 will result in the database assuming the SQL 2012 (11.0) compatibility mode inside the SQL 2017 environment. This is not necessarily a problem, but it does have consequences with respect to how you can use features and improvements and potentially how it performs.

You can view the compatibility level of a database in SQL Server Management Studio. To do so, in Object Explorer, right-click a database, and then, on the shortcut menu, click Properties. In the pane on the left, click Options. On the Options page, the Compatibility Level list box displays the current setting. You can change that setting or use the ALTER DATABASE command to change the COMPATIBILITY_LEVEL setting. You can also view this setting for all databases in the system catalog via sys.databases; look for the compatibility_level column.
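For example, assuming the WideWorldImporters sample database used earlier, you could review and then raise the compatibility level with T-SQL similar to the following sketch (140 is the SQL Server 2017 level):

SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'WideWorldImporters';

ALTER DATABASE WideWorldImporters
SET COMPATIBILITY_LEVEL = 140;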

SQL Server provides database compatibility modes for backward compatibility to database-level features, including improvements to the query optimizer, additional fields in dynamic management objects, syntax improvements, and other database-level objects.

For the scenario in which a SQL Server 2017 (internal version 140) instance is hosting a database in SQL Server 2012 (internal version 110) compatibility mode, it is important to note that applications are still connecting to a SQL Server 2017 instance. Only database-level features and options are honored in the prior compatibility modes.

For example, some recent syntax additions such as the new STRING_SPLIT() or OPENJSON functions, added in SQL Server 2016, will not work when run in the context of a database in a prior compatibility mode. Some syntax improvements, such as DATEFROMPARTS() and AT TIME ZONE, will work in any database in any compatibility mode in SQL Server 2017.

SQL Server 2017 supports compatibility levels down to SQL Server 2008 (internal version 100).

Changing the database compatibility level does not require a service restart to take place, but we strongly recommend that you do not perform this during normal operating hours. Promoting the database compatibility mode should be thoroughly tested in preproduction environments. Even though syntax errors are extremely unlikely, other changes to the query optimizer engine from version to version could result in performance changes to the application that must be evaluated prior to rollout to a production system. When you do upgrade production from a prior compatibility level, you should do so during a maintenance period, not during user activity.

arrow.jpg For more information on the differences between compatibility modes since SQL 2005, reference MSDN article bb510680 or visit https://docs.microsoft.com/sql/t-sql/statements/alter-database-transact-sql-compatibility-level.

SQL logins

SQL-authenticated logins and their associated database users are connected via security identifier (SID), not by name. When moving a database from one instance to another, the SIDs in the SQL logins on the old instance might be different from the SIDs in the SQL logins on the new instance, even if their names match. After migration to the new instance, SQL-authenticated logins will be unable to access databases where their database users have become “orphaned,” and you must repair this. This does not affect Windows Authenticated logins for domain accounts.

This condition must be repaired before applications and end users will be able to access the database in its new location. Refer to the section “Solving orphaned SIDs” in Chapter 6.
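As a sketch of how such orphans can be detected before applying the repairs described in Chapter 6, the following query, run in the migrated database, lists SQL-authenticated database users whose SIDs match no login on the instance:

SELECT dp.name AS orphaned_user
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
    ON dp.sid = sp.sid
WHERE dp.type = 'S'                -- SQL-authenticated database users
  AND dp.authentication_type = 1   -- excludes guest and system users
  AND sp.sid IS NULL;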

The database owner is among the security objects that must be accounted for on the new server. Ensure that the owner of the database, listed either in the Database Properties dialog box or in the owner_sid column of sys.databases, is still a valid principal on the new instance.

For databases with Partial Containment, contained users of each type will be restored or attached along with the database, so they should not be a concern.

Database-scoped configurations

Database-scoped configurations were introduced in SQL Server 2016 (and in Azure SQL Database v12) and represent a container for a set of options that can be configured at the database level. In earlier versions, these settings, such as Max Degree of Parallelism (MaxDOP), were available only at the server level or for individual queries.

arrow.jpg For more information on Parallelism and MaxDOP, go to Chapter 9.

You should evaluate these options for each database after it is copied to a new instance to determine whether the settings are appropriate. The desired MaxDOP, for example, could change if the number of logical processors differs between the old and new instances.

You can view each of these database-scoped configurations in SQL Server Management Studio. In Object Explorer, right-click a database, and then, on the shortcut menu, click Properties. In the pane on the left, click Options. A heading just for database-scoped configurations appears at the top of the Other Options list. You can also view database-scoped configurations in the dynamic management view sys.database_scoped_configurations.
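For example, the following T-SQL reviews the current database-scoped configurations and then changes one of them (the MAXDOP value shown is illustrative, not a recommendation):

SELECT name, value, value_for_secondary
FROM sys.database_scoped_configurations;

ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;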

Database configuration settings

You should review database-specific settings at the time of migration, as well. You can review them with a quick glance at the sys.databases catalog view, or from the Database Properties dialog box in SQL Server Management Studio.

The following is not a list of all database settings (we cover these and many more later in the chapter), but you should pay particular attention to them when restoring, deploying, or attaching a database to a new instance.

  • Read Only. If the database was put in READ_ONLY mode before the migration to prevent data movement, be sure to change this setting back to READ_WRITE.

  • Recovery Model. Different servers might have different backup and recovery methods. In a typical environment, the FULL recovery model is appropriate for production environments when the data loss tolerance of the database is smaller than the frequency of full backups, or when point-in-time recovery is appropriate. If you are copying a database from a production environment to a development environment, it is likely you will want to change the recovery model from FULL to SIMPLE. If you are copying a database from a testing environment to a production environment for the first time, it is likely you will want to change the recovery model from SIMPLE to FULL.

arrow.jpg For more information about database backups and the appropriate recovery model, see Chapter 11.

  • Page Verify Option. For all databases, this setting should be CHECKSUM. The legacy TORN_PAGE_DETECTION option is a sign that the database has been moved up over the years from a pre-SQL 2005 version without this setting ever being changed. CHECKSUM has been the superior option and the default since SQL 2005, but on upgraded databases an administrator must change it manually.

  • Trustworthy. This setting is not moved along with the database. If it was turned on in the previous system because external assemblies, cross-database queries, and/or Service Broker required it, you will need to turn it on again. We do not recommend ever turning on this setting unless an inflexible architecture requirement makes it necessary. It can allow malicious activity on one database to affect other databases, even if specific permissions have not been granted. It is crucial to limit this setting and to understand cross-database permission chains in a multitenant or web-hosted shared SQL Server environment.

arrow.jpg For more on object ownership, see Chapter 6.
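The settings in the preceding list can be reviewed and, where needed, corrected with T-SQL similar to the following sketch (the database name is illustrative):

SELECT name, is_read_only, recovery_model_desc,
       page_verify_option_desc, is_trustworthy_on
FROM sys.databases
WHERE name = N'WideWorldImporters';

ALTER DATABASE WideWorldImporters SET READ_WRITE;
ALTER DATABASE WideWorldImporters SET RECOVERY FULL;
ALTER DATABASE WideWorldImporters SET PAGE_VERIFY CHECKSUM;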

Transparent data encryption

Transparent data encryption (TDE) settings will follow the database as it is moved from one instance to another, but the certificate and the certificate’s security method will not. For example, the server certificate created to encrypt the database key and the private key and its password are not backed up along with the database. These objects must be moved to the new instance along with the database prior to any attempt to restore or attach the database.

arrow.jpg For more information on transparent data encryption, see Chapter 7.
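As a sketch, assuming a server certificate named TDECert and illustrative file paths and password, moving the certificate to the new instance might look like this:

-- On the source instance: back up the certificate and its private key
USE master;
BACKUP CERTIFICATE TDECert
TO FILE = 'C:\Backup\TDECert.cer'
WITH PRIVATE KEY (FILE = 'C:\Backup\TDECert.pvk',
    ENCRYPTION BY PASSWORD = '<UseAStrongPasswordHere>');

-- On the destination instance: re-create the certificate
-- before restoring or attaching the database
USE master;
CREATE CERTIFICATE TDECert
FROM FILE = 'C:\Backup\TDECert.cer'
WITH PRIVATE KEY (FILE = 'C:\Backup\TDECert.pvk',
    DECRYPTION BY PASSWORD = '<UseAStrongPasswordHere>');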

Moving existing databases

There are a number of strategies for moving or copying a SQL Server database from one instance to another. You should consider each as it relates to necessary changes to application connection strings, DNS, storage, and security environments. We’ll review a number of options for migration in this section.

Restoring a database backup

Restoring a backup is an easily understandable way to copy data from one instance to another. You can also carry out this method in such a way as to minimize the outage impact.

Let’s compare two different simplified migration processes. Following is a sample migration checklist using a FULL backup/restore:

arrow.jpg For more information on the types of database backups and database restores, see Chapter 11.

  1. Begin application outage.

  2. Perform a FULL backup of the database.

  3. Copy the database backup file.

  4. Restore the FULL backup.

  5. Resolve any SQL-authenticated login security issues or any other changes necessary before use.

  6. In applications and/or DNS and/or aliases, change connection strings.

  7. End application outage.

In the preceding scenario, the application outage must last the entire span of the backup, copy, and restore, which for large databases could be quite lengthy, even with native SQL Server backup compression reducing the file size.

Instead, consider the following strategy:

  1. Perform a FULL compressed backup of the database.

  2. Copy the database backup file.

  3. Restore the FULL backup WITH NORECOVERY.

  4. Begin application outage.

  5. Take a differential backup and then a log backup of the database.

  6. Copy the differential backup file and the log backup file to the new server.

  7. Restore the differential backup file WITH NORECOVERY.

  8. Restore the transaction log backup WITH RECOVERY.

  9. Resolve any SQL-authenticated login security issues or any other changes necessary before use.

  10. In applications and/or DNS and/or aliases, change the connection strings.

  11. End application outage.

In this scenario, the application outage spans only the duration of the differential and transaction log backup/copy/restore operations, which for large databases should involve only a tiny fraction of the overall size of the database. This scenario does require more preparation and scripting in advance, and it requires coordination with the usual backup system responsible for transaction log backups: by taking a manual transaction log backup, you could split the transaction log backup chain maintained by another system, which you must take into account.
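The second checklist might be scripted roughly as follows; the database name and file paths are illustrative, and the RESTORE statements run on the new instance:

-- Before the outage (steps 1 through 3)
BACKUP DATABASE WideWorldImporters
TO DISK = N'C:\Backup\WWI_full.bak' WITH COMPRESSION;

RESTORE DATABASE WideWorldImporters
FROM DISK = N'C:\Backup\WWI_full.bak' WITH NORECOVERY;

-- During the outage (steps 5 through 8)
BACKUP DATABASE WideWorldImporters
TO DISK = N'C:\Backup\WWI_diff.bak' WITH DIFFERENTIAL;
BACKUP LOG WideWorldImporters
TO DISK = N'C:\Backup\WWI_log.trn';

RESTORE DATABASE WideWorldImporters
FROM DISK = N'C:\Backup\WWI_diff.bak' WITH NORECOVERY;
RESTORE LOG WideWorldImporters
FROM DISK = N'C:\Backup\WWI_log.trn' WITH RECOVERY;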

Attaching detached database files

Detaching, copying, and attaching database files will also get the job done of putting the database in place on a new instance. It is relatively straightforward to disassociate (detach) the files from the old SQL Server instance, copy the files to the new server, and then attach the files to the new SQL Server instance. This approach is largely limited by the data transfer speed of copying the files. You might also consider moving the SAN drives to the new server to decrease the time spent waiting for files to copy.

Attaching copied database files can be faster than restoring a full database backup; however, it lacks the ability to minimize the outage by taking advantage of transaction log backups (see earlier).

Copying the full set of database files (remember that the database might contain many more files than just the .mdf and .ldf files, including secondary data files and FILESTREAM containers) is not faster than restoring a transaction log backup during the application outage, and it is not a true recovery method. Because database backup files can also be compressed natively by SQL Server, the data transfer duration between the Windows servers will be reduced by using the backup/restore strategy.
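A minimal sketch of the detach/attach sequence, with illustrative file paths:

-- On the old instance
USE master;
EXEC sp_detach_db @dbname = N'WideWorldImporters';

-- After copying the files, on the new instance
CREATE DATABASE WideWorldImporters
ON (FILENAME = N'F:\Data\WideWorldImporters.mdf'),
   (FILENAME = N'G:\Log\WideWorldImporters.ldf')
FOR ATTACH;

Remember that all of the database’s files, including any secondary data files and FILESTREAM containers, must be copied and listed for the attach to succeed.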

Moving data with BACPAC files

A BACPAC file is an open XML-format file that contains the database schema and row data, allowing for the migration of databases, ideally at the start of a development/migration phase and not for large databases. SQL Server Management Studio can both generate and import BACPAC files, and the Azure portal can import them when moving an on-premises SQL Server database to an Azure SQL database.

Creating a database

In this section, we review the basics of database settings and configuration. As a DBA, you might not create databases from scratch regularly, but you should be familiar with all the settings and design decisions that go into their creation, including adding database files and the tools involved.

Managing default settings

It is important to understand the role of the model database when creating new databases, regardless of the method of creation. The model database and its entire contents and configuration options are copied to any new database, even TempDB upon service restart. For this reason, never store any data (even for testing) in the model database. Similarly, do not grow the model database from its default size, because this will require all future databases to be that size or larger.

However, the location of the model database’s files is not used as a default for new databases. Instead, the default location for database files is stored at the server level. You can view these default storage locations, which should be changed and must be valid, in the Server Properties dialog box in SQL Server Management Studio, on the Database Settings page. There you will find the default locations for Data, Log, and Backup files. These locations are stored in the registry.

On this page, you’ll also see the default recovery interval setting, which is 0 by default, meaning that SQL Server manages the frequency of internal automatic CHECKPOINTs. This typically results in an internal checkpoint frequency of one minute. This is an advanced setting that can be (though rarely is) overridden at the database level; it should not be changed at the server or database level except by experienced administrators.

Also on the Database Settings page of Server Properties you will find the default index fill factor and default backup compression setting. These are server-level defaults applied to each database, but you cannot configure them separately for each database. You can configure them independently at each index level and with each backup statement, respectively.

Among the settings inherited by new databases from the model database unless overridden at the time of creation are the following:

  • Initial data and log file size

  • Data and log file Autogrowth setting

  • Data and log file Maximum size

  • Recovery model

  • Target Recovery Time (which would override the system default recovery interval)

  • All database-scoped configurations, including the database-level settings for Legacy Cardinality Estimation, MaxDOP, Parameter Sniffing, and Query Optimizer Fixes.

  • All the Automatic settings, including auto close, auto shrink, auto create/update statistics. (We discuss each of these later in the chapter.)
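For example, to have future databases inherit different defaults, you can change these settings on the model database itself; the values below are illustrative:

ALTER DATABASE model SET RECOVERY SIMPLE;
ALTER DATABASE model SET TARGET_RECOVERY_TIME = 60 SECONDS;
ALTER DATABASE model SET AUTO_CLOSE OFF;
ALTER DATABASE model SET AUTO_SHRINK OFF;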

Owning the databases you create

The login that runs the CREATE DATABASE statement becomes the owner of any database it creates, even if the account is not a member of the sysadmin server role. Any principal that can create a database becomes the owner of that database, even if, for example, it has only membership in the dbcreator built-in server role.

Ideally, databases are not owned by named individual accounts. You might decide to change each database to a service account specific to that database’s dependent applications. You must do this after the database is created. You cannot change the database owner via SQL Server Management Studio; instead, you must use the ALTER AUTHORIZATION T-SQL statement.

arrow.jpg For more information on the best practices with respect to database ownership and how to change the database owner, see Chapter 6.
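For example, to change the owner to a dedicated service account (the database and account names here are illustrative):

ALTER AUTHORIZATION ON DATABASE::WideWorldImporters
TO [DOMAIN\svc_wwi_app];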

Creating additional database files

Every SQL Server database needs at least one data file and one log file. You can use additional data files to maximize storage performance. (We discuss physical database architecture in detail in Chapter 3.)

However, adding additional log files is not a wise long-term decision. There is no performance advantage to be gained from more than one transaction log file for a database, because SQL Server does not stripe writes across multiple log files; it writes to the transaction log sequentially.

The only scenario in which a second transaction log file would be needed is if the first had filled its volume. If no space can be freed on the volume to allow additional transaction log data to be written, the database cannot accept new transactions and will refuse new application requests. In this scenario, one possible troubleshooting method is to temporarily add a second transaction log file on another volume, creating the space needed for the database to resume accepting transactions. The eventual resolution involves clearing the primary transaction log file, performing a one-time shrink to return it to its original size, and removing the second transaction log file.
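A sketch of this emergency procedure, with illustrative logical file names, sizes, and paths:

-- Temporarily add a second log file on a volume with free space
ALTER DATABASE WideWorldImporters
ADD LOG FILE (NAME = N'WWI_log_temp',
    FILENAME = N'E:\Log\WWI_log_temp.ldf', SIZE = 8GB);

-- Later, after the primary log file has been cleared:
USE WideWorldImporters;
DBCC SHRINKFILE (N'WWI_log', 8192);  -- one-time shrink back to original size (in MB)
-- The temporary file must be empty before it can be removed
ALTER DATABASE WideWorldImporters REMOVE FILE [WWI_log_temp];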

Using SQL Server Management Studio to create a new database

You can create and configure database files, specifically their initial sizes, in SQL Server Management Studio. In Object Explorer, right-click Databases, and then, on the shortcut menu, click New Database to open the New Database dialog box.

After you have configured the new database’s settings but before you click OK, you can script the T-SQL for the CREATE DATABASE statement.

Here are a few suggestions when creating a new database:

  • Pregrow your database and log files to their expected sizes. This avoids autogrowth events as you initially populate your database. You can speed up this process greatly by granting the Perform Volume Maintenance Tasks permission to the SQL Server service account so that instant file initialization is possible. (Instant file initialization applies to data files only, not to log files.)

arrow.jpg We covered instant file initialization earlier in this chapter.

  • Consider the SIMPLE recovery model for your database until it enters production use. Then, the FULL or BULK_LOGGED recovery models might be more appropriate.

arrow.jpg For more information on database backups and the appropriate recovery model, see Chapter 11.

  • Review the logical and physical file names of your database and their locations. The default locations for the data and log files are a server-level setting, but you can override them here. You can also move the files later on (we cover this later in this chapter).

  • As soon as the database is created, follow up with your backup strategy to ensure that it is covered as appropriate for its role.
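Putting these suggestions together, a CREATE DATABASE statement might look like the following sketch; the file names, paths, and sizes are illustrative:

CREATE DATABASE WideWorldImporters
ON PRIMARY
    (NAME = N'WWI_data', FILENAME = N'F:\Data\WWI_data.mdf',
     SIZE = 10GB, FILEGROWTH = 1GB)
LOG ON
    (NAME = N'WWI_log', FILENAME = N'G:\Log\WWI_log.ldf',
     SIZE = 2GB, FILEGROWTH = 512MB);

-- SIMPLE recovery until the database enters production use
ALTER DATABASE WideWorldImporters SET RECOVERY SIMPLE;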

Deploying a database via SQL Server Data Tools

You can also deploy developed databases to a SQL Server instance using a Database Project in SQL Server Data Tools. For databases for which objects will be developed by your team or another team within your enterprise, SQL Server Data Tools provides a professional and mature environment for teams to develop databases, check them into source control, generate change scripts for incremental deployments, and reduce object scripting errors.

SQL Server Data Tools can generate incremental change scripts or deploy databases directly. It also has the option to drop or re-create databases for each deployment, though this is turned off by default.

You might find it easiest to create the new database by using SQL Server Management Studio and then deploy incremental changes to it with SQL Server Data Tools.

Database properties and options

In this section, we review some commonly changed and managed database settings. There are quite a few settings on the Options page in Database Properties, many involving rarely changed defaults or ANSI-standard deviations for legacy support.

You can view each of these settings in SQL Server Management Studio via Object Explorer. To do so, right-click a database, and then, on the shortcut menu, click Properties. In the Database Properties dialog box, in the pane on the left, click Options. You also can review database settings for all databases in the sys.databases catalog view.

The subsections that follow discuss the settings that you need to consider when creating and managing SQL Server databases.

Collation

Collations exist at three levels in a SQL Server instance: the database, the instance, and TempDB. The collation of the TempDB database by default matches the collation for the instance and should differ only in otherwise unavoidable circumstances. Ideally, the collations in all user databases match the collation at the instance level and for TempDB, but there are scenarios in which an individual database might need to operate in a different collation.

Oftentimes databases differ from the server-level collation to enforce case sensitivity, but you can also enforce language usage differences (such as kana or accent sensitivity) and sort order differences at the database level.

The default collation for the server is decided at installation and is preselected for you based on the regionalization settings of the Windows Server. You can override this during installation. Some applications, such as Microsoft Dynamics GP, require a case-sensitive collation.

Whereas the server-level collation is virtually unchangeable, databases can change collation. You should change a database’s collation only before code is developed for the database or only after extensive testing of existing code.

Be aware that unmatched collations in databases could cause issues when querying across those databases, so you should try to avoid collation differences between databases that will be shared by common applications.

For example, if you write a query that includes a table in a database that’s set to the collation SQL_Latin1_General_CP1_CI_AS (which is case insensitive and accent sensitive) and a join to a table in a database that’s set to SQL_Latin1_General_CP1_CS_AS (which is case sensitive), you will receive the following error:

Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "SQL_Latin1_General_CP1_CS_AS" in the equal to operation.

Short of changing either database to match the other, you will need to modify your code to use the COLLATE statement when referencing columns in each query, as demonstrated in the following example:

… FROM
CS_AS.sales.sales s1
INNER JOIN CI_AS.sales.sales s2
ON s1.[salestext] COLLATE SQL_Latin1_General_CP1_CI_AS = s2.[salestext]

In contained databases, collation is defined at two different levels: the database and the catalog. You cannot change the catalog collation from Latin1_General_100_CI_AS_WS_KS_SC. Database metadata and variables are always in the catalog’s collation. The COLLATE DATABASE_DEFAULT syntax can also be a very useful tool if you do not know the database’s collation before execution.
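Building on the earlier example, COLLATE DATABASE_DEFAULT coerces both columns to the current database’s default collation without hard-coding a collation name:

… FROM
CS_AS.sales.sales s1
INNER JOIN CI_AS.sales.sales s2
ON s1.[salestext] COLLATE DATABASE_DEFAULT = s2.[salestext] COLLATE DATABASE_DEFAULT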

Recovery model

The FULL recovery model is appropriate for production environments when the data loss tolerance of the database is smaller than the frequency of full backups or when point-in-time recovery is appropriate. If you are copying a database from a production environment to a development environment, it is likely you will want to change the recovery model from FULL to SIMPLE. If you are copying a database from a testing environment to a production environment for the first time, it is likely that you will want to change the recovery model from SIMPLE to FULL.

arrow.jpg For more information on database backups and the appropriate recovery model, see Chapter 11.

Compatibility level

SQL Server provides database compatibility modes for backward compatibility to database-level features, including improvements to the query optimizer, additional fields in dynamic management objects, syntax improvements, and other database-level objects.

Compatibility mode is a database-level setting, and databases upgraded from an older version to a new version will retain a prior compatibility mode. For example, some new syntax additions in SQL Server 2016 such as the new STRING_SPLIT() or OPENJSON functions will not work when run in the context of a database in a prior compatibility mode. Other syntax improvements, such as DATEFROMPARTS() and AT TIME ZONE, will work in any database in any compatibility mode in SQL Server 2017.

SQL Server 2017 supports compatibility levels down to SQL Server 2008 (internal version 100), the same as SQL Server 2016.

Changing the database compatibility level does not require a service restart to take place, but we strongly recommend that you do not perform this change during normal operating hours. Promoting the database compatibility mode should be thoroughly tested in preproduction environments. Even though syntax errors are extremely unlikely, other changes to the query optimizer engine from version to version could result in performance changes to the application that must be evaluated prior to rollout to a production system. When you do upgrade production from a prior compatibility level, you should do so during a maintenance period, not during user activity.

You should review database-specific settings at the time of migration, as well. You can review them from a quick scroll of the sys.databases catalog view or from the database properties window in SQL Server Management Studio.

The following is not a list of all database settings, but you should pay attention to these when restoring, deploying, or attaching a database to a new instance.

Containment type

Partially contained databases represent a fundamental change in the relationship between server and database. They are an architectural decision that you make when applications are intended to be portable between multiple SQL Server instances or when security should be entirely limited to the database context, not in the traditional server login/database user sense.

arrow.jpg For more information about the security implications of contained databases, see Chapter 6.

Azure SQL databases are themselves a type of contained database, able to move from host to host in the Azure platform as a service (PaaS) environment, transparent to administrators and users. You can design databases that can be moved between SQL Server instances in a similar fashion, should the application architecture call for such capability.

Changing the Containment Type from None to Partial converts the database to a partially contained database; this change should not be taken lightly. We do not advise converting a database that has already been developed without the partial containment setting, because there are differences in how temporary objects behave and how collations are enforced. Some database features, including Change Data Capture, Change Tracking, replication, and some parts of Service Broker, are not supported in partially contained databases. While logged in as a member of the sysadmin server role or the db_owner database role, you should carefully review the system dynamic management view sys.dm_db_uncontained_entities for an inventory of objects that are not contained.
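For example, the following sketch produces a quick inventory of uncontained objects when run in the database being evaluated:

SELECT ue.class_desc,
       OBJECT_NAME(ue.major_id) AS [object_name],
       ue.statement_type
FROM sys.dm_db_uncontained_entities AS ue;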

Autoclose

You should turn on this setting only in very specific and resource-exhausted environments. It activates the periodic closure of connections and the clearing of buffer allocations when user requests are done. When active, it defeats the very purpose of application connection pooling, rendering certain application architectures useless and increasing the number of login events. You should never turn on this setting as part of a performance tuning or troubleshooting exercise in a busy environment.

Auto Create statistics

When you turn on this setting, the query optimizer automatically creates statistics needed for runtime plans, even for read-only databases (whose statistics are stored in TempDB). Some applications, such as SharePoint, handle the creation and updating of statistics themselves because of the dynamic nature of their tables and queries. Unless an application such as SharePoint insists otherwise, you should turn on this setting. You can identify autocreated statistics in the database by their naming convention, which is similar to _WA_Sys_<column_number>_<hexadecimal>.
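For example, the following query lists the autocreated statistics in the current database:

SELECT OBJECT_NAME(s.[object_id]) AS [table_name],
       s.name AS [statistic_name]
FROM sys.stats AS s
WHERE s.auto_created = 1;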

Autocreate incremental statistics

Introduced in SQL Server 2014, this setting allows for the creation of statistics that take advantage of table partitioning, reducing the overhead of statistics creation. This setting has no impact on nonpartitioned tables. Because it can reduce the cost of creating and updating statistics, you should turn it on.

This will have an effect only on new statistics created after this setting is turned on. When you turn it on, you should update the statistics on tables with partitions, including the INCREMENTAL = ON parameter, as shown here:

UPDATE STATISTICS [dbo].[HorizontalPartitionTable] [PK_HorizontalPartitionTable] WITH
RESAMPLE, INCREMENTAL = ON;

You also should update any manual statistics maintenance scripts you have implemented to use the ON PARTITIONS parameter when applicable. In the catalog view sys.stats, the is_incremental column equals 1 for statistics that were created incrementally. With incremental statistics in place, you can update a single partition, as demonstrated here:

UPDATE STATISTICS [dbo].[HorizontalPartitionTable] [PK_HorizontalPartitionTable] WITH
RESAMPLE ON PARTITIONS (1);
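As a quick check, this sketch lists the statistics objects in the current database that are incremental:

```sql
-- Verify which statistics were created incrementally
SELECT OBJECT_NAME(s.object_id) AS table_name, s.name AS stats_name
FROM sys.stats AS s
WHERE s.is_incremental = 1;
```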

Autoshrink

You should never turn on this setting. It automatically returns to the operating system any free space greater than 25 percent of the data file or transaction log. You should shrink a database only as a one-time operation to reduce file size after unplanned or unusual file growth. Turning this setting on can result in unnecessary fragmentation, overhead, and frequent rapid log autogrowth events.
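If you inherit a database with this setting enabled, you can turn it off with a single command; the database name here is hypothetical:

```sql
-- Ensure automatic shrinking is disabled
ALTER DATABASE [SalesDB] SET AUTO_SHRINK OFF WITH NO_WAIT;
```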

Auto Update Statistics

When turned on, statistics will be updated periodically. Statistics are considered out of date by the query optimizer when a ratio of data modifications to rows in the table has been reached. The query optimizer checks for and updates out-of-date statistics before running a query plan, which carries some overhead, though the performance benefit of updated statistics usually outweighs this cost, especially when the updated statistics result in a better plan. Because the query optimizer updates the statistics first and then runs the plan, the update is described as synchronous.

Auto Update Statistics Asynchronously

This setting changes the behavior of Auto Update Statistics in one important detail: queries will run even if the query optimizer has identified an out-of-date statistics object. The statistics are then updated asynchronously, afterward.

Allow Snapshot Isolation

This setting allows for the use of Snapshot Isolation mode at the query level. When you turn it on, the row versioning process begins in TempDB, but the setting does little more than make this mechanism available in the database. To actually use Snapshot Isolation mode, you must change code; for example, to include SET TRANSACTION ISOLATION LEVEL SNAPSHOT.

arrow.jpg For much more on Snapshot Isolation and other isolation levels, see Chapter 9.
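As a sketch, enabling the setting and then opting in at the session level might look like the following; the database and table names are hypothetical:

```sql
-- Allow snapshot isolation in a hypothetical database
ALTER DATABASE [SalesDB] SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
-- Individual sessions must still opt in explicitly
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM [dbo].[Orders]; -- reads a row-versioned snapshot
COMMIT TRANSACTION;
```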

Is Read Committed Snapshot On

Turning on this setting changes the default isolation mode of the database from READ COMMITTED to READ COMMITTED SNAPSHOT. You should not turn this on during regular business hours; instead, do it during a maintenance window. Ideally, however, this setting is on and accounted for during development.

There will be an impact on TempDB utilization, as well as a rise in the IO_COMPLETION and WAIT_XTP_RECOVERY wait types, so you need to perform proper load testing. This setting, however, is potentially a major performance improvement and is at the core of enterprise-quality concurrency.
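Because the change requires exclusive access to the database, a sketch for a maintenance window might use ROLLBACK IMMEDIATE to clear existing connections; the database name is hypothetical:

```sql
-- Change the default isolation behavior; rolls back open transactions
ALTER DATABASE [SalesDB] SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
```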

Page Verify Option

For all databases, this setting should be CHECKSUM. The legacy TORN_PAGE_DETECTION option is a sign that the database has been moved up over the years from a pre-SQL Server 2005 version without this setting ever being changed. Since SQL Server 2005, CHECKSUM has been the superior option and the default, but an administrator must change it manually on upgraded databases.
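As a sketch, you could find offending databases and correct each one; the database name in the second statement is hypothetical:

```sql
-- Find databases not using CHECKSUM page verification
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';

-- Correct the setting for an upgraded database
ALTER DATABASE [olddb] SET PAGE_VERIFY CHECKSUM WITH NO_WAIT;
```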

Trustworthy

We do not recommend ever turning on this setting unless an inflexible architecture requirement makes it necessary. Doing so could allow malicious activity in one database to affect other databases, even if specific permissions have not been granted. Before turning on this setting, you should understand the implications of cross-database ownership chains in a multitenant or web-hosted shared SQL Server environment.

arrow.jpg For more on object ownership, see Chapter 6.

Database Read-Only

You can set an older database, or a database intended for nonchanging archival, to READ_ONLY mode to prevent changes. Any member of the server sysadmin role or the database db_owner role can revert this to READ_WRITE, so you should not consider this setting a security measure.
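Setting and reverting the mode is a single command each way; the database name here is hypothetical:

```sql
-- Prevent changes to an archival database
ALTER DATABASE [ArchiveDB] SET READ_ONLY WITH NO_WAIT;

-- Any sysadmin or db_owner member can revert it
ALTER DATABASE [ArchiveDB] SET READ_WRITE WITH NO_WAIT;
```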

Database-Scoped Configurations

First introduced in SQL Server 2016 (and also in Azure SQL Database v12), Database-Scoped Configurations are a set of options previously available only at the server level or for individual queries, such as Max Degree of Parallelism (MaxDOP). You can now easily change, via database options, settings that previously were available only via trace flags at the server level.

You can view each of these Database-Scoped Configurations in SQL Server Management Studio. In Object Explorer, right-click a database, and then, on the shortcut menu, click Properties. In the Database Properties dialog box, in the pane on the left, click Options. On the Options page, a heading just for Database-Scoped Configurations appears at the top of the Other Options list.

The current database context is important for determining which database’s properties will be applied to a query that references objects in multiple databases. This means that the same query, run in two different database contexts, can have different execution plans, potentially because of differences in each database’s MaxDOP setting, for example.
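As a sketch, you could set a database-level MaxDOP and then review the configured values; the value shown is only an example:

```sql
-- Override the server-level MaxDOP for the current database only
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;

-- Review all database-scoped configurations for this database
SELECT name, value, value_for_secondary
FROM sys.database_scoped_configurations;
```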

Query Store

Introduced in SQL Server 2016, the Query Store is a built-in reporting mechanism and data warehouse for measuring and tracking cached runtime plans. Though useful, it is not on by default, and you should turn it on as soon as possible if you intend to use it to aid performance tuning and troubleshooting cached runtime plans.

arrow.jpg For more information on the Query Store, see Chapter 9.
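A minimal sketch of turning on the Query Store with its default settings follows; the database name is hypothetical:

```sql
-- Enable the Query Store for a database
ALTER DATABASE [SalesDB] SET QUERY_STORE = ON;
```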

Indirect checkpoints

If your database was created in SQL Server 2016 or 2017, it is already configured to use indirect checkpoints, which became the default for all new databases in SQL Server 2016. However, databases created on earlier versions of SQL Server continue to use the classic automatic checkpoint, which has been in place since SQL Server 7.0 and has been only tweaked since then.

This is an advanced topic, and one that we won’t dive into too deeply, save for one configuration option that you should change on databases that have been upgraded from versions prior to SQL Server 2016.

What is a checkpoint? This is the process by which SQL Server writes to the drive both data and transaction log pages modified in memory, also known as “dirty” pages. Checkpoints can be issued manually by using the CHECKPOINT command but are issued in the background for you, so issuing CHECKPOINT is rarely necessary and is usually limited to troubleshooting.

What is automatic checkpoint? Prior to SQL Server 2016 and since SQL Server 7.0, by default all databases used automatic checkpoint. The rate at which dirty pages were written from memory to the drive has increased across versions, as the disk I/O and memory capacities of servers have increased. The goal of automatic checkpoint was to ensure that all dirty pages were managed within a goal defined in the server configuration option Recovery Interval. By default, this was 0, which meant it was automatically configured. This tended to be around 60 seconds, but was more or less unconcerned with the number of pages dirtied by transactions between checkpoints.

What is indirect checkpoint? This is a new strategy of taking care of “dirty pages” that is far more scalable and can deliver a performance difference especially on modern systems with a large amount of memory. Indirect checkpoints manage dirty pages in memory differently; instead of scanning memory, indirect checkpoints proactively gather lists of dirty pages. Indirect checkpoints then manage the list of dirty pages and continuously commit them from memory to the drive, on a pace to not exceed an upper bound of recovery time. This upper bound is defined in the database configuration option TARGET_RECOVERY_TIME. By default, in databases created in SQL Server 2016 or higher, this is 60 seconds. In databases created in SQL Server 2012 or 2014, this option was available but set to 0, which indicates that legacy automatic checkpoints are in use.

So, even though the recovery time goal hasn’t really changed, the method by which it is achieved has. Indirect checkpoints are significantly faster than automatic checkpoints, especially as servers are configured with more and more memory. You might notice an improvement in the performance of backups specifically.

You can configure a database that was created on an older version of SQL Server to use indirect checkpoints instead of automatic checkpoints with a single command. The TARGET_RECOVERY_TIME will be 0 for older databases still using automatic checkpoint. The master database will also have a TARGET_RECOVERY_TIME of 0 by default, though msdb and model will be set to 60 starting with SQL Server 2016.

Consider setting the TARGET_RECOVERY_TIME database configuration to 60 seconds to match the default for new databases created in SQL Server 2016 or higher, as shown here:

ALTER DATABASE [olddb] SET TARGET_RECOVERY_TIME = 60 SECONDS WITH NO_WAIT;

You can check this setting for each database in the TARGET_RECOVERY_TIME_IN_SECONDS column of the system view sys.databases.
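For example, this quick sketch flags databases that are still using legacy automatic checkpoints:

```sql
-- A value of 0 indicates legacy automatic checkpoints are in use
SELECT name, target_recovery_time_in_seconds
FROM sys.databases
ORDER BY name;
```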