Working with SSDT
Our introduction to the SSDT toolset is complete, and it’s now time to see it in action. The rest of this chapter presents a sample scenario that demonstrates, step-by-step, how to use many of the SSDT features that you’ve just learned about. Practicing along with this example will solidify your knowledge and understanding of the tool, and prepare you for using SSDT in the real world with real database projects.
Connecting with SQL Server Object Explorer
The journey will start by creating a database. You will first use SSDT to execute a prepared script that creates the database in a query window connected to a SQL Server instance. Then you will start working with SSDT directly against the connected database. This experience is similar to using previous tools, so it’s the perfect place to start. Later on, you’ll switch to working disconnected using an offline database project.
Launch Visual Studio 2010, click the View menu, and choose SQL Server Object Explorer. This displays the new SQL Server Object Explorer in a Visual Studio panel (docked to the left, by default). This new tool window is the main activity hub for the connected development experience in SSDT. From the SQL Server Object Explorer, you can easily connect to any server instance for which you have credentials. In our scenario, the localhost instance running on the development machine is a full SQL Server 2012 Developer edition installation. This instance is assuming the role of a live production database server that you can imagine is running in an on-premise datacenter. You’re now going to connect to that “production” database server.
Right-click the SQL Server node at the top of the SQL Server Object Explorer, choose Add SQL Server, and type in your machine name as the server to connect to. You could certainly type localhost instead (or even the single-dot shorthand for localhost), but we're directing you to use the machine name. This is because you'll soon learn about the new local database runtime (LocalDB) that SSDT provides for offline testing and debugging. The LocalDB instance always runs locally, whereas the "production" database only happens to be running locally in this scenario. Because it can be potentially confusing to see both localhost and (localdb) in the SQL Server Object Explorer, using the machine name makes it clear that one represents the production database while the other refers to the database used for local (offline) development and testing with SSDT. The screen snapshots for the figures in this chapter were taken on a Windows Server 2008 R2 machine named SQL2012DEV, so we'll be using that machine name throughout the chapter when referring to the production database running on localhost. Of course, you'll need to replace the assumed SQL2012DEV machine name with your own machine name wherever you see it mentioned.
If you have installed SQL Server to use mixed-mode authentication and you are not using Windows authentication, then you’ll also need to choose SQL Server authentication and supply your credentials at this point, before you can connect. Once connected, SQL Server Object Explorer shows the production server and lets you drill down to show all the databases running on it, as shown in Figure 1-2.
Figure 1-2 The new SQL Server Object Explorer in Visual Studio expanded to show several connected databases.
Once connected, right-click the server instance node SQL2012DEV and choose New Query. Visual Studio opens a new T-SQL code editor window, like the one shown in Figure 1-3.
Figure 1-3 A connected query window.
This environment should seem very familiar to anyone experienced with SSMS or Query Analyzer. Notice the status bar at the bottom indicating that the query window is currently connected to the SQL2012DEV instance (again, it will actually read your machine name). The toolbar at the top includes a drop-down list indicating the current default database for the instance you’re connected to. As with previous tools, this will be the master database (as shown at the top of Figure 1-3). You must still take care to change this setting (or issue an appropriate USE statement) so that you don’t inadvertently access the master database when you really mean to access your application’s database. In this exercise, you’re creating a brand new database, so it’s fine that the current database is set to master at this time.
Type the code shown in Listing 1-1 into the query window (or open the script file available in the downloadable code on this book’s companion website; see the section "Code Samples" in the "Introduction" for details on how to download the sample code). You might next be inclined to press F5 to execute the script, but that won’t work. With SSDT in Visual Studio, pressing F5 builds and deploys a SQL Server Database Project to a debug instance (you’ll be creating such a project later on, but you don’t have one yet). This is very different from the way F5 is used in SSMS or Query Analyzer to immediately execute the current script (or currently selected script).
SSDT uses a different keyboard shortcut for this purpose. In fact, there are two keyboard shortcuts (with corresponding toolbar buttons and right-click context menu items); one to execute without a debugger (Ctrl+Shift+E) and one to execute using an attached debugger (Alt+F5). You’ll practice debugging later on, so for now just press Ctrl+Shift+E to immediately execute the script and create the database (you can also click the Execute Query button in the toolbar, or right-click anywhere within the code window and choose Execute from the context menu).
LISTING 1-1. T-SQL script for creating the SampleDb database
CREATE DATABASE SampleDb
GO

USE SampleDb
GO

-- Create the customer and order tables
CREATE TABLE Customer(
  CustomerId bigint NOT NULL PRIMARY KEY,
  FirstName varchar(50) NOT NULL,
  LastName varchar(50) NOT NULL,
  CustomerRanking varchar(50) NULL)

CREATE TABLE OrderHeader(
  OrderHeaderId bigint NOT NULL,
  CustomerId bigint NOT NULL,
  OrderTotal money NOT NULL)

-- Create the relationship
ALTER TABLE OrderHeader
  ADD CONSTRAINT FK_OrderHeader_Customer
    FOREIGN KEY(CustomerId) REFERENCES Customer(CustomerId)

-- Add a few customers
INSERT INTO Customer
  (CustomerId, FirstName, LastName, CustomerRanking)
  VALUES
    (1, 'Lukas', 'Keller', NULL),
    (2, 'Jeff', 'Hay', 'Good'),
    (3, 'Keith', 'Harris', 'so-so'),
    (4, 'Simon', 'Pearson', 'A+'),
    (5, 'Matt', 'Hink', 'Stellar'),
    (6, 'April', 'Reagan', '')

-- Add a few orders
INSERT INTO OrderHeader
  (OrderHeaderId, CustomerId, OrderTotal)
  VALUES
    (1, 2, 28.50), (2, 2, 169.00),                -- Jeff's orders
    (3, 3, 12.99),                                -- Keith's orders
    (4, 4, 785.75), (5, 4, 250.00),               -- Simon's orders
    (6, 5, 6100.00), (7, 5, 4250.00),             -- Matt's orders
    (8, 6, 18.50), (9, 6, 10.00), (10, 6, 18.00)  -- April's orders
GO

-- Create a handy view summarizing customer orders
CREATE VIEW vwCustomerOrderSummary WITH SCHEMABINDING AS
  SELECT c.CustomerID, c.FirstName, c.LastName, c.CustomerRanking,
    ISNULL(SUM(oh.OrderTotal), 0) AS OrderTotal
  FROM dbo.Customer AS c
  LEFT OUTER JOIN dbo.OrderHeader AS oh ON c.CustomerID = oh.CustomerID
  GROUP BY c.CustomerID, c.FirstName, c.LastName, c.CustomerRanking
GO
This is a very simple script that we’ll discuss in a moment. But first, notice what just happened. SSDT executed the script directly against a connected SQL Server instance, and then split the code window horizontally to display the resulting server messages in the bottom pane. The green icon labeled Query Executed Successfully in the lower-left corner offers assurance that all went well with the script execution. Because of the two multi-row INSERT statements used to create sample customers and order data, you can see the two “rows affected” messages in the bottom Message pane, as shown in Figure 1-4. Overall, the experience thus far is very similar to previous tools, ensuring a smooth transition to SSDT for developers already familiar with the older tooling.
Figure 1-4 The query window after successfully executing a T-SQL script.
This simple script created a database named SampleDb containing two tables, Customer and OrderHeader. It also defined a foreign key on the CustomerId column in OrderHeader that references the CustomerId primary key in Customer, which establishes the parent-child (one-to-many) relationship between the two tables. It then added a few customer and related order rows into their respective tables. Finally, it created a view summarizing each customer’s orders by aggregating all their order totals.
Now run two queries to view some data. At the bottom of the code window, type the following two SELECT statements:
SELECT * FROM Customer
SELECT * FROM vwCustomerOrderSummary
Notice the IntelliSense as you type. After you finish typing, hover the cursor over Customer, and then again over vwCustomerOrderSummary. Visual Studio displays tooltips describing those objects respectively as a table and a view. Now hover the cursor over the star symbol in each SELECT statement. Visual Studio displays a tooltip listing all the fields represented by the star symbol in each query. This functionality is provided by the SSDT T-SQL language services running in the background that continuously query the database model backed by the connected SampleDb database.
Now select just the two SELECT statements (leave the entire script above them unselected) and press Ctrl+Shift+E. The result is similar to pressing F5 in SSMS or Query Analyzer: only the selected text is executed (which is what you’d expect). SSDT runs both queries and displays their results, as shown in Figure 1-5.
You don’t need the query window any longer, so go ahead and close it now (you also don’t need to save the script). Right-click the Databases node in SQL Server Object Explorer and choose Refresh. You’ll see the new SampleDb database node appear. Expand it to drill down into the database. As Figure 1-6 shows, the environment is similar to the Object Explorer in SSMS, and lets you carry out most (but not all) of the developer-oriented tasks that SSMS lets you perform.
Figure 1-5 Viewing the results of selected statements executed in the query window.
Figure 1-6 The SampleDb database in SQL Server Object Explorer expanded to show several of its objects.
The database is now up and running on SQL2012DEV. Everything is perfect—until that email from marketing arrives. Their team has just put together some new requirements for you, and now there’s more work to be done.
Gathering New Requirements
The new requirements pertain to the way customers are ranked in the Customer table. Originally, the marketing team had requested adding the CustomerRanking column as a lightweight mechanism for data entry operators to rank customer performance. This ad-hoc rating was supposed to be loosely based on the customer’s total purchases across all orders, but you can see from the CustomerRanking values in Figure 1-5 that users followed no consistency whatsoever as they entered data (no surprise there). They’ve typed things like A+, so-so, and Good. And some customers have no meaningful data at all, such as empty strings, whitespace, or NULL values.
To improve the situation, the marketing team would like to retire the ad-hoc data entry column and replace it with a formalized customer ranking system that is more aligned with their original intent. In their change request email (which is naturally flagged Urgent), they have attached the spreadsheet shown in Figure 1-7 containing new reference data for various pre-defined ranking levels. They’ve scribbled something about deploying to SQL Azure as well, and then they sign off with “P.S., We need it by Friday” (and no surprise there, either).
Figure 1-7 Reference data for the new customer ranking system.
After giving the matter some thought, you organize a high-level task list. Your list itemizes the development steps you plan on taking to fulfill the marketing department’s requirements:
Remove the CustomerRanking column from the Customer table.
Create a new CustomerRanking table based on the spreadsheet in Figure 1-7, with a primary key column CustomerRankingId storing the values 1 through 5.
Add a new column CustomerRankingId to the Customer table.
Create a foreign key on the CustomerRankingId column to establish a relationship between the Customer table and the new CustomerRanking table.
Update the vwCustomerOrderSummary view to join on the new CustomerRanking table.
Create a new uspRankCustomers stored procedure to update the Customer table’s new foreign key column, based on each customer’s total order value.
Validate for SQL Azure, then deploy to the cloud.
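At a high level, the schema changes in steps 1 through 4 can be sketched in plain T-SQL. This is a rough outline for orientation only; you'll actually implement each step with the SSDT tooling in the sections that follow, the column definitions shown here anticipate details settled later in the chapter, and (as you're about to discover) step 1 isn't quite as simple as this sketch suggests:

```sql
-- Step 1: retire the ad-hoc ranking column
ALTER TABLE Customer DROP COLUMN CustomerRanking

-- Step 2: reference table for the five pre-defined ranking levels
CREATE TABLE CustomerRanking(
  CustomerRankingId int NOT NULL PRIMARY KEY,
  RankName varchar(20) NOT NULL,
  [Description] varchar(max) NOT NULL)

-- Steps 3 and 4: nullable foreign key column, plus the relationship
ALTER TABLE Customer ADD CustomerRankingId int NULL
ALTER TABLE Customer ADD CONSTRAINT FK_Customer_CustomerRanking
  FOREIGN KEY(CustomerRankingId) REFERENCES CustomerRanking(CustomerRankingId)
```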
The rest of this chapter walks through this procedure in detail, step by step. Along the way, you’ll learn to leverage many important SSDT features and will gain insight into the way the new tooling works. It’s time to get started with the first step: removing a column from a table.
Using the Table Designer (Connected)
In SQL Server Object Explorer, right-click the Customer table and choose View Designer to open the SSDT table designer, as shown in Figure 1-8.
Figure 1-8 The new SSDT table designer.
The top-left pane of this designer window lists the defined columns in a grid just as in the SSMS table designer, but the similarity ends there. A very different mechanism is at play with the new SSDT designer, one that should be easy to understand after all the discussion we’ve had around declarative, model-based design. The CREATE TABLE statement in the bottom T-SQL pane gives it away. Knowing that the table already exists in the database, why is this a CREATE statement? Well, that’s because this isn’t actually T-SQL code that you intend to execute against the database as-is (which would fail of course, because the table exists). Rather, it’s a T-SQL declaration of “how this table should look,” whether it exists or not—and indeed, whether it exists with a different schema or not—in the target database.
Here’s what’s actually happening. The designer is operating over a memory-resident database model inside its working environment. Because you are connected at the moment, that model is backed by the live SampleDb database. But when you switch to working offline with a SQL Server Database Project (as you will in the next section of this chapter), you’ll interact with the very same table designer over a model backed by a project instead of a real database. A model can also be backed by a database snapshot. Because the table designer just operates over a model, the same tool works consistently in any of these scenarios.
You want to remove the CustomerRanking column, and that can be done either by deleting it from the grid in the top pane or editing it out of the declarative T-SQL code in the bottom pane. Both panes are merely views into the same table, so any changes appear bidirectionally. Throughout this exercise, you’ll experiment with different editing techniques in the table designer, starting with the quickest method. Just right-click CustomerRanking in the top grid and choose Delete. The column is removed from the grid and, as you’d expect, the T-SQL code is updated to reflect the change.
That was a pretty easy change. Applying that change to the database should be easy, too. Go ahead and click the Update button on the toolbar. Unfortunately, instead of just working as you’d like, you receive the following error message:
Update cannot proceed due to validation errors. Please correct the following errors and try again.

SQL71501 :: View: [dbo].[vwCustomerOrderSummary] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [dbo].[Customer].[c]::[CustomerRanking], [dbo].[Customer].[CustomerRanking] or [dbo].[OrderHeader].[c]::[CustomerRanking].

SQL71501 :: View: [dbo].[vwCustomerOrderSummary] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [dbo].[Customer].[c]::[CustomerRanking], [dbo].[Customer].[CustomerRanking] or [dbo].[OrderHeader].[c]::[CustomerRanking].

SQL71558 :: The object reference [dbo].[Customer].[CustomerID] differs only by case from the object definition [dbo].[Customer].[CustomerId].

SQL71558 :: The object reference [dbo].[OrderHeader].[CustomerID] differs only by case from the object definition [dbo].[OrderHeader].[CustomerId].
What went wrong? Referring back to Listing 1-1, notice that the view definition for vwCustomerOrderSummary specifies the WITH SCHEMABINDING clause. This means that the columns of the view are bound to the underlying tables exposed by the view, which protects you from “breaking” the view with schema changes—as you’ve done just now. The problem, as reported by the first two errors, is that the schema-bound view’s CustomerRanking column has suddenly been removed from the Customer table that underlies the view. The two SQL71558 errors are actually only case-sensitivity warnings that, on their own, would not prevent the update from succeeding. We will explain these case-sensitivity warnings a bit later; for now, remain focused on the dependency issue that’s blocking the update.
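The same protection applies if you attempt the change directly in T-SQL; SQL Server itself refuses to drop a column that a schema-bound view depends on. For example, a statement like this would be rejected by the server:

```sql
-- Fails with a dependency error, because vwCustomerOrderSummary
-- is schema-bound to this column
ALTER TABLE dbo.Customer DROP COLUMN CustomerRanking
```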
The interesting thing worth noting at this point is that SSDT caught the condition before even attempting to apply your changes to the live database (which would certainly have thrown an error). In fact, you could have been aware of this issue even before clicking Update if you had previously opened the Error List pane, because SSDT constantly validates changes to the model in the background while you edit it in the designer.
Click the Cancel button to dismiss the error window. Then click the View menu and choose Error List to open the pane. Notice how the errors and warnings appear, just like compilation errors appear for C# and VB .NET projects. And just like those project types, you can double-click items in the Error List and instantly navigate to the offending code to deal with the errors. In this case, both dependency errors are in vwCustomerOrderSummary, so double-click either one now to open a code editor into the view, as shown in Figure 1-9.
Figure 1-9 Detecting and navigating validation errors.
You want to revise this view to join against the new CustomerRanking table, but that’s not coming up until step 5 in your task list. So for now, just perform the minimum edits necessary to clear the validation errors (which are identified by red squigglies like those you’ve seen in other Visual Studio code windows) so you can update the database and move on. Delete (or comment out) the two references to the c.CustomerRanking column in the view (one is in the column list, the other in the GROUP BY clause). Notice how the errors disappear from the Error List pane as you correct the code. You’re now beginning to experience the effects of model-based development with SSDT in Visual Studio.
With a clear Error List, you know that all your changes are valid. You have altered a table and a view, but those changes have been made only to the memory-resident model. The changed objects are currently open in separate Visual Studio windows, and both windows have an Update button. Yet it makes no difference which of the two buttons you click—in either case, Update means that all changes that have been buffered get sent to the database. So whichever Update button you click, your edits to both the table and the view are going to get applied to the database at once.
How is that going to happen? The edits were simple enough, but the T-SQL change script needed to apply those edits is actually a bit more complex. And therein lies the beauty of this new tooling—all of that scripting complexity is handled for you by SSDT. The tool compares the edited model with a brand-new model based on the live database, and then works out the change script automatically. Creating a fresh model from the database at this time ensures that you’re working with its latest state, in case it has drifted since it was first modeled for the current editing session. SSDT then runs an internal schema compare operation between the edited model (the source) and the latest model based on the live database (the target) to identify all their differences. Finally, SSDT generates a change script that can be executed on the live database to apply the changes.
Before running the change script, SSDT displays an informative report of all the actions that the change script is going to take. Click Update now to display the Preview Database Updates window, as shown in Figure 1-10.
Figure 1-10. The Preview Database Updates window.
You should always scrutinize this preview to make sure it’s consistent with the actions and results you would expect of the edits you’ve made. In this case, you’re being warned about data loss in the Customer table by dropping the CustomerRanking column. You’re also being told that the script will drop and then re-create the schema binding of the vwCustomerOrderSummary view, before and after the table is altered. All of this is expected. Now you can click Update Database to immediately execute the change script, or you can click Generate Script to load the change script into a code editor so you can view, possibly modify, and choose to either execute it or not.
In most cases, you’ll feel comfortable just clicking Update Database, particularly if you’ve reviewed the warnings and actions reported by the database update preview. Doing so will immediately execute the change script against the live database. But because this is your very first update, click Generate Script instead so you can examine the script before you run it. The script is shown in Listing 1-2 (to conserve space, the script’s error-checking code has been elided and replaced with comment placeholders).
LISTING 1-2. The change script for the altered table and view automatically generated by SSDT.
/*
Deployment script for SampleDb
*/
-- ...
:setvar DatabaseName "SampleDb"
GO
-- ...
USE [$(DatabaseName)];
GO
-- ...
BEGIN TRANSACTION
GO
PRINT N'Removing schema binding from [dbo].[vwCustomerOrderSummary]...';
GO
ALTER VIEW [dbo].[vwCustomerOrderSummary]
AS
  SELECT c.CustomerID, c.FirstName, c.LastName, c.CustomerRanking,
    ISNULL(SUM(oh.OrderTotal), 0) AS OrderTotal
  FROM dbo.Customer AS c
  LEFT OUTER JOIN dbo.OrderHeader AS oh ON c.CustomerID = oh.CustomerID
  GROUP BY c.CustomerID, c.FirstName, c.LastName, c.CustomerRanking;
-- ...
GO
PRINT N'Altering [dbo].[Customer]...';
GO
ALTER TABLE [dbo].[Customer] DROP COLUMN [CustomerRanking];
GO
-- ...
PRINT N'Adding schema binding to [dbo].[vwCustomerOrderSummary]...';
GO
-- Create a handy view summarizing customer orders
ALTER VIEW vwCustomerOrderSummary WITH SCHEMABINDING AS
  SELECT c.CustomerID, c.FirstName, c.LastName,
    ISNULL(SUM(oh.OrderTotal), 0) AS OrderTotal
  FROM dbo.Customer AS c
  LEFT OUTER JOIN dbo.OrderHeader AS oh ON c.CustomerID = oh.CustomerID
  GROUP BY c.CustomerID, c.FirstName, c.LastName
GO
-- ...
IF @@TRANCOUNT > 0
  BEGIN
    PRINT N'The transacted portion of the database update succeeded.'
    COMMIT TRANSACTION
  END
ELSE
  PRINT N'The transacted portion of the database update failed.'
GO
DROP TABLE #tmpErrors
GO
PRINT N'Update complete.'
GO
It’s great that you didn’t have to write the change script, but it’s still important that you understand the change script. Let’s look it over quickly now.
Using variable substitution, the script first issues a USE statement that sets SampleDb as the current database and then it begins a transaction. The transaction will get committed only if the entire change script completes successfully. Then the script issues an ALTER VIEW statement that removes the schema binding from vwCustomerOrderSummary without yet changing its definition. So it still contains those two references to the CustomerRanking column that’s about to get dropped from the Customer table, but that will not present a problem because WITH SCHEMABINDING has been removed from the view. Next, the script issues the ALTER TABLE statement that actually drops the column from the table. After the column is dropped, another ALTER VIEW statement is issued on vwCustomerOrderSummary with the new version that no longer references the dropped column and is once again schemabound. Finally, the transaction is committed and the update is complete.
Press Ctrl+Shift+E. The script is executed, and output describing the actions taken by the script is displayed in the Messages pane:
Removing schema binding from [dbo].[vwCustomerOrderSummary]...
Altering [dbo].[Customer]...
Adding schema binding to [dbo].[vwCustomerOrderSummary]...
The transacted portion of the database update succeeded.
Update complete.
You can close all open windows now. Visual Studio will prompt you to save changes, but it’s not necessary to do so because the database has just been updated. In SQL Server Object Explorer, right-click the SampleDb database, choose Refresh, and then drill down into the database to confirm that the table and view changes have been applied. You will see that the CustomerRanking column has been removed from the Customer table, which completes step 1.
Working Offline with a SQL Server Database Project
With SQL Server Database Projects, you can develop databases with no connection whatsoever to a SQL Server instance. A SQL Server Database Project is a project that contains individual, declarative, T-SQL source code files. These source files collectively define the complete structure of a database. Because the database definition is maintained this way inside a Visual Studio project, it can be preserved and protected with source code control (SCC) just like the artifacts in all your other Visual Studio project types. SSDT generates a model from the project structure behind the scenes, just like it generates a model from the live database when working connected. This lets you use the same SSDT tools whether working offline or connected.
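For example, once you import the SampleDb database (as you're about to do), the Customer table will be represented in the project by a file named Customer.sql containing roughly the following. Note that it's a declaration of the table's desired state, expressed as a CREATE TABLE statement, not a change script:

```sql
CREATE TABLE [dbo].[Customer] (
    [CustomerId] BIGINT       NOT NULL PRIMARY KEY,
    [FirstName]  VARCHAR (50) NOT NULL,
    [LastName]   VARCHAR (50) NOT NULL
);
```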
You carried out your first task online while connected directly to a live database. Now you’ll create a SQL Server Database Project for the database so that you can continue your work offline. Although (as you’ve seen) it’s easy to use SSDT for connected development, you should ideally develop your databases offline with SQL Server Database Projects, and publish to live servers whenever you’re ready to deploy your changes. By doing so, you will derive the benefits of source control, snapshot versioning, and integration with the rest of your application’s code through the Visual Studio solution and project system.
There are several ways to create a SQL Server Database Project. You can start with an empty project, design a database structure from the ground up, and then publish the entire structure to a new database on a SQL Server instance locally or in the cloud on SQL Azure. Or, as in this scenario, you have an existing database on a local SQL Server instance from which you want to generate a SQL Server Database Project. And you want this project populated with all the declarative T-SQL source files that completely define the existing database structure.
It’s easy to do this with the Import Database dialog. Right-click the SampleDb database under the SQL2012DEV instance in SQL Server Object Explorer and choose Create New Project to display the Import Database dialog, as shown in Figure 1-11.
Figure 1-11. Creating a SQL Server Database Project from an existing database.
The source database connection confirms that your new project will be generated from the SampleDb database on SQL2012DEV. Change the target project name from Database1 to SampleDb (and set the location, too, if you wish). Check the Create New Solution checkbox, and if you have an SCC provider for Visual Studio, check Add To Source Control as well. Then click Start.
If you checked Add To Source Control, you will be prompted at this point to supply credentials and server information (this will depend on your default SCC provider in Visual Studio). It takes a few moments for Visual Studio to scour the database, discover its schema, and generate the declarative T-SQL source files for the new project. When done, Visual Studio displays a window with information describing all the actions it took to create the project. Click Finish to close this window. The new project is then opened in the Solution Explorer automatically (docked to the right, by default). The dbo folder represents the dbo schema in the database. Expand it, and then expand the Tables and Views folders, as shown in Figure 1-12.
Figure 1-12. The source-controlled SQL Server Database Project after importing a database.
SSDT set up your project this way because the Folder Structure setting in the Import Database dialog (Figure 1-11) was set to Schema\Object Type. This tells SSDT to create a folder for each schema, and then a folder for each object type (table, view, and so on) contained in that schema. You are free to create additional folders as you extend your project. Unless you have a very specific or unique convention, it is best practice to maintain a consistent project structure based on the schema organization in the database like we’ve shown here.
Taking a Snapshot
Before you make any offline database changes, take a snapshot. This will create a single-file image of the current database schema that you can refer to or revert to at any time in the future. You’ll take another snapshot when you’ve completed all your database changes, and thereby preserve the two points in time—just before, and just after, the changes are made. And because they are maintained as part of the project, snapshot files are also protected under source control.
Right-click the SampleDb project in Solution Explorer and choose Snapshot Project. After validating the project, Visual Studio creates a new Snapshots folder and, inside the Snapshots folder, it creates a new .dacpac file with a filename based on the current date and time. You’ll usually want to give the snapshot a better name, so rename the file to Version1Baseline.dacpac.
Using the Table Designer (Offline Database Project)
With your “baseline” snapshot saved, you’re ready to create the new CustomerRanking table. Recall that this is the new reference table based on the spreadsheet in Figure 1-7. In Solution Explorer, right-click the project’s Tables folder (under the dbo schema folder) and choose Add | Table. Name the table CustomerRanking and click Add.
A new offline table designer window opens. You’ll find that it looks and feels exactly the same as the one in Figure 1-8 that you used when working online. That’s because it is the same tool, only this time it’s the designer over a model backed by a source-controlled project file (CustomerRanking.sql) rather than a model backed by a live table. Because there’s no connected database, the table designer has no Update button—instead, when working offline, schema changes are saved to the project script file for that table. This in turn updates the model, and then the same validation checks and IntelliSense you experienced when working while connected are run against the project. So you can find out right away if and when you make a breaking change, before deploying to a real database.
Earlier, when you removed the CustomerRanking column from the Customer table, we mentioned that you can design the table using either the grid in the top pane or the T-SQL code window in the bottom pane. You can also view and change parts of the schema definition from the Properties grid. We’ll demonstrate all of these techniques now as you lay out the schema of the new CustomerRanking table.
SSDT starts you off with a new table that has one column in it: an int primary key named Id. To rename this column to CustomerRankingId, select the column name Id in the top pane’s grid, replace it with CustomerRankingId, and press Enter. Beneath it, add the RankName column, set its data type to varchar(20), and uncheck Allow Nulls. You can see that SSDT updates the T-SQL code in the bottom pane with a CREATE TABLE statement that reflects the changes made in the top pane.
Add the third column by editing the T-SQL code in the bottom pane. Append a comma after the second column and type [Description] VARCHAR(200) NOT NULL. As expected, the grid in the top pane is updated to show the new Description column you just added in code.
Finally, tweak the data type using the Properties grid. Click the Description column in the top pane and scroll to the Length property in the Properties grid (to display the Properties grid if it’s not currently visible, click View and choose Properties Window). Click the drop-down list and select MAX to change the data type from varchar(200) to varchar(max). When you’re done, the table designer should look similar to Figure 1-13.
Figure 1-13 The table designer for the new CustomerRanking table in an offline SQL Server Database Project.
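For reference, the CREATE TABLE statement in the bottom pane should now read roughly as follows (a sketch; SSDT’s exact whitespace and bracketing may differ slightly from what you see on screen):

```sql
CREATE TABLE [dbo].[CustomerRanking]
(
    [CustomerRankingId] INT           NOT NULL PRIMARY KEY, -- renamed from the default Id
    [RankName]          VARCHAR (20)  NOT NULL,             -- added in the top pane's grid
    [Description]       VARCHAR (MAX) NOT NULL              -- added in code, then widened
)                                                           -- via the Properties grid
```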
Save CustomerRanking.sql and close the table designer. This completes step 2. You are now ready to add the foreign key column to the Customer table (step 3) that joins to this new CustomerRanking table. Double-click Customer.sql in Solution Explorer to open the table designer for the Customer table. Use any technique you’d like to add a new nullable int column named CustomerRankingId (it must be nullable at this point, because it doesn’t have any data yet).
Now you can establish the foreign key relationship to the CustomerRanking table (step 4). In the upper-right corner of the table designer is a Context View area that summarizes other pertinent objects related to the table. In the Context View, right-click Foreign Keys and choose Add New Foreign Key. Name the new foreign key FK_Customer_CustomerRanking (it is best practice to assign foreign key names that indicate which tables participate in the relationship). Then edit the FOREIGN KEY template code added to the T-SQL code window in the bottom pane to be FOREIGN KEY (CustomerRankingId) REFERENCES CustomerRanking(CustomerRankingId). The table designer window should now look similar to Figure 1-14. After reviewing the table schema, save the Customer.sql file and close the designer.
Figure 1-14 The table designer for the Customer table after creating the foreign key on CustomerRankingId.
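Putting steps 3 and 4 together, the project’s Customer.sql should now contain something along these lines. This is a sketch: only the new column and constraint are shown in full, and a comment stands in for the columns the imported definition already declares.

```sql
CREATE TABLE [dbo].[Customer]
(
    [CustomerId] INT NOT NULL PRIMARY KEY,
    -- ...the table's other existing columns (FirstName, LastName, and so on) remain unchanged...
    [CustomerRankingId] INT NULL,  -- nullable for now; there is no ranking data yet
    CONSTRAINT [FK_Customer_CustomerRanking]
        FOREIGN KEY ([CustomerRankingId]) REFERENCES [CustomerRanking] ([CustomerRankingId])
)
```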
Your next tasks involve altering a view (step 5) and creating a stored procedure (step 6). It will be very helpful to have a test SQL Server environment available as you implement these steps. You don’t want to use SQL2012DEV, because that’s the “live” server. You need another SQL Server that can be used just for testing offline.
LocalDB gives you that test environment. This is a new, lightweight, single-user instance of SQL Server that spins up on demand when you build your project. This is extremely handy when working offline and no other development server is available to you. The official name for this new variant of SQL Server is “SQL Server Express LocalDB,” which can be misleading because it is distinct from the regular Express edition of SQL Server. To avoid confusion, we refer to it simply as “LocalDB.”
Press F5 to build the project. This first validates the entire database structure defined by the project and then deploys it to LocalDB. Note, however, that this is just the default behavior; you can change the project properties to target another available server for testing if you require features not supported by LocalDB (for example, FILESTREAM, which we cover in Chapter 8).
The deployment is carried out by performing a schema compare between the project and LocalDB on the target server. More precisely, and as already explained, models of the source project and target database are generated, and the schema compare actually works on the two models. Because this is your very first build, the database does not exist yet on the target server, so the schema compare generates a script that creates the whole database from scratch. As you modify the project, subsequent builds will generate incremental change scripts that specify just the actions needed to bring the target database back in sync with the project.
Look back over in SQL Server Object Explorer and you’ll see that SSDT has started a new LocalDB instance. The instance name is (localdb)\SampleDb, and it is a completely separate instance from the SQL2012DEV instance (which has not yet been updated with the new CustomerRanking table and the CustomerRankingId foreign key in the Customer table). Figure 1-15 shows SampleDb deployed to LocalDB, expanded to reveal its tables. Notice that it does include the new CustomerRanking table.
Figure 1-15 The local database runtime (LocalDB) after building and deploying the SQL Server Database Project.
Now you have a test database to play around with, but of course there’s no data in it. You will add some now so that you can test the view you’re about to alter and the stored procedure you’re about to create. Using simple copy/paste, SSDT lets you import small sets of rows from any tabular source (including Microsoft Word tables and Excel spreadsheets) into a database table that has compatible columns.
First, bring in the reference data from the ranking definitions provided by the spreadsheet in Figure 1-7. You can easily grab the data straight out of the spreadsheet and dump it right into the new CustomerRanking table. Open the spreadsheet in Excel, select the five rows of data (complete rows, not cells or columns), then right-click the selection and choose Copy. Back in SQL Server Object Explorer, right-click the CustomerRanking table and choose View Data. The Editable Data Grid in SSDT opens with a template for inserting a new row. Right-click the row selector for the new row template and choose Paste (be sure to right-click the row selector in the left gray margin area, and not a cell, before pasting). As shown in Figure 1-16, SSDT faithfully copies the data from Excel into the CustomerRanking table.
Figure 1-16 Reference data imported into a database table from Excel via the clipboard.
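If you prefer scripting over the clipboard, equivalent INSERT statements would look like the following. The RankName and Description values here are hypothetical placeholders—use the actual values from the Figure 1-7 spreadsheet—but the CustomerRankingId values 1 through 5 must line up with the thresholds the ranking stored procedure will apply in step 6.

```sql
INSERT INTO [dbo].[CustomerRanking] ([CustomerRankingId], [RankName], [Description])
VALUES
    (1, 'Bronze',   'No orders placed'),               -- hypothetical display names;
    (2, 'Silver',   'Orders totaling under $100'),     -- substitute the real values
    (3, 'Gold',     'Orders totaling under $1,000'),   -- from the Figure 1-7
    (4, 'Platinum', 'Orders totaling under $10,000'),  -- spreadsheet
    (5, 'Diamond',  'Orders totaling $10,000 or more')
```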
You also need a few customers to work with. Using the same copy/paste technique, you will transfer rows from the Customer table in the production database to the test database on LocalDB (for this exercise, you won’t transfer related order data in the OrderHeader table). There are only a handful of customers, so you’ll just copy them all over. Typically though, you’d extract just the subset of data that provides a representative sampling good enough for testing purposes. Expand the production server (SQL2012DEV) node in SQL Server Object Explorer and drill down to the Customer table. Right-click the table and choose View Data. Select all the customer rows, then right-click the selection and choose Copy. Next, right-click the Customer table in the test database on (localdb)\SampleDb and choose View Data. Right-click the new row selector and choose Paste to copy in the six customer rows.
You are now at step 5, which is to update the vwCustomerOrderSummary view. Recall that this is the same view you edited back in step 1 (while connected), when you removed the schema-bound reference to the old CustomerRanking column that was being dropped from the Customer table. With the new reference table and foreign key relationship now in place, you will revise the view once again (offline, this time) to join with the CustomerRanking table on CustomerRankingId, so that it can expose the display name in the reference table’s RankName column.
In Solution Explorer, double-click vwCustomerOrderSummary.sql in the project’s Views folder (under the dbo schema folder). The view opens up in a new code window, and your attention may first be drawn to several squigglies that Visual Studio paints in the view’s code. They’re not red, because there is really nothing significantly wrong with the view, and so these are just warnings. Hover the cursor over one of them to view the warning text in a tooltip (you can also see all of them listed as warning items in the Error List pane). The warnings indicate that the view uses CustomerID (ending in a capital D) to reference a column that is actually defined as CustomerId (ending in a lowercase d). These are the same case-sensitivity warnings you saw earlier when attempting to update the database with dependency issues. Object names in SQL Server are normally not case-sensitive (like VB .NET), but non-default collation settings can change that behavior so that they are case-sensitive (like C#). This would cause a problem if you deployed the view to a SQL Server instance configured for a case-sensitive collation of object names.
Add another LEFT OUTER JOIN to the view to add in the CustomerRanking table joined on the CustomerRankingId of the Customer table, and add RankName to the SELECT and GROUP BY column lists. You want your code to be squeaky-clean, so now is also a good time to resolve those case-sensitivity warnings. Replace CustomerID with CustomerId in the four places that it occurs (once in the SELECT column list, twice in the first JOIN, and once more in the GROUP BY column list). Listing 1-3 shows the view definition after making the changes.
LISTING 1-3. The updated vwCustomerOrderSummary view definition joining on the new CustomerRanking table.
-- Create a handy view summarizing customer orders
CREATE VIEW vwCustomerOrderSummary WITH SCHEMABINDING AS
 SELECT
  c.CustomerId, c.FirstName, c.LastName, r.RankName,
  ISNULL(SUM(oh.OrderTotal), 0) AS OrderTotal
 FROM
  dbo.Customer AS c
  LEFT OUTER JOIN dbo.OrderHeader AS oh ON c.CustomerId = oh.CustomerId
  LEFT OUTER JOIN dbo.CustomerRanking AS r ON c.CustomerRankingId = r.CustomerRankingId
 GROUP BY
  c.CustomerId, c.FirstName, c.LastName, r.RankName
Save the vwCustomerOrderSummary.sql file to update the offline project. You know that pressing F5 now will deploy the changed view to the test database on LocalDB. But what if you attempt to execute the script directly by pressing Ctrl+Shift+E, right here in the code window? Go ahead and try. You’ll receive this error message in response:
Msg 2714, Level 16, State 3, Procedure vwCustomerOrderSummary, Line 2
There is already an object named 'vwCustomerOrderSummary' in the database.
Here’s what happened. First, SSDT connected the query window to the (localdb)\SampleDb instance. Then it attempted to execute the script imperatively against the connected database, just as you’ve already seen with Ctrl+Shift+E. But being part of an offline project, this script is declarative and so it’s expressed as a CREATE VIEW statement. The view already exists in the database, so the error message makes perfect sense. Again, the proper way to update the database is to deploy it via an incremental deployment script by debugging with F5.
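Incidentally, if you ever did need to push this one change imperatively from a connected query window (outside the build-and-deploy flow), the statement would have to be phrased as ALTER VIEW, precisely because the object already exists—something like the following sketch. Avoid doing this against a database managed by a project, though: the project script must stay declarative (CREATE VIEW), and changes applied this way drift out of sync with the model.

```sql
ALTER VIEW vwCustomerOrderSummary WITH SCHEMABINDING AS
 SELECT
  c.CustomerId, c.FirstName, c.LastName, r.RankName,
  ISNULL(SUM(oh.OrderTotal), 0) AS OrderTotal
 FROM
  dbo.Customer AS c
  LEFT OUTER JOIN dbo.OrderHeader AS oh ON c.CustomerId = oh.CustomerId
  LEFT OUTER JOIN dbo.CustomerRanking AS r ON c.CustomerRankingId = r.CustomerRankingId
 GROUP BY
  c.CustomerId, c.FirstName, c.LastName, r.RankName
```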
However, you are indeed connected to the test database on LocalDB, even though you’re working inside the query window of an offline project that hasn’t yet been deployed to LocalDB. This means that you can actually test the view before you deploy it. Select all the text from SELECT until the end of the script (that is, leave only the CREATE VIEW portion of the window unselected) and press Ctrl+Shift+E again. This time, you get a much better result. Only the chosen SELECT statement executes, which is perfectly valid T-SQL for the connected database. The query results show you what the view is going to return, and you got that information without having to deploy first. In this mode, you are actually working connected and offline simultaneously! You can select any T-SQL to instantly execute it, test and debug stored procedures, and even get execution plans, all while “offline.”
Refactoring the Database
The view is ready to be deployed, but now you decide to change some names first. Customers are the only thing being ranked, so shortening the table name CustomerRanking to Ranking and column names CustomerRankingId to RankingId is going to make your T-SQL more readable (which is important!). Without the proper tooling, it can be very tedious to rename objects in the database. But the refactoring capabilities provided by SSDT make this remarkably easy.
In the new LEFT OUTER JOIN you just added, right-click on the CustomerRanking table reference, and then choose Refactor, Rename. Type Ranking for the new table name and click OK. You are presented with a preview window (Figure 1-17) that will appear very familiar if you’ve ever used the refactoring features in Visual Studio with ordinary .NET projects.
Figure 1-17 Previewing changes before refactoring is applied to the database.
This dialog shows all the references to the CustomerRanking table that will be changed to Ranking when you click Apply (notice that checkboxes are provided so that you can also choose which references should get updated and which should not). Scroll through the change list to preview each one, and then click Apply to immediately invoke the rename operation. Every affected project file is updated accordingly, but Visual Studio won’t actually rename project files themselves. The project script file defining the newly renamed Ranking table is still named CustomerRanking.sql. Right-click the file in Solution Explorer, choose Rename, and change the filename to Ranking.sql.
Now rename the primary key column in the Ranking table along with its corresponding foreign key column in the Customer table, both of which are currently named CustomerRankingId. The two key columns are referenced on the same LEFT OUTER JOIN line, so this will be easy. Right-click the r.CustomerRankingId key column in the join and choose Refactor, Rename. Type RankingId for the new name, click OK, preview the changes, and click Apply to update the primary key column name in the Ranking table. Then repeat for the c.CustomerRankingId key column to update the foreign key column name in the Customer table (the actual order in which you refactor the column names in these tables is immaterial).
There’s one more thing to rename, and that’s the foreign key definition in the Customer table. This isn’t strictly necessary of course, but the (self-imposed) convention to name foreign key definitions after the tables they join dictates that FK_Customer_CustomerRanking should be renamed to FK_Customer_Ranking. The Customer table is specified first in the view’s FROM clause, so right-click on it now and choose Go to Definition. This navigates directly to the Customer table definition in a new query window. In the CONSTRAINT clause, right-click FK_Customer_CustomerRanking and choose Refactor, Rename. Type FK_Customer_Ranking for the new name, click OK, preview the changes (just one, in this case), and click Apply.
You’re all set to deploy the changes with another build, so press F5 once again. After the build completes, click Refresh in the SQL Server Object Explorer toolbar and look at the test database running under (localdb)\SampleDb to confirm that the CustomerRanking table has been renamed to Ranking. Right-click the Ranking table and choose View Data to confirm that all the data in the renamed table is intact. When you rename objects in a SQL Server Database Project, SSDT generates a change script with corresponding EXECUTE sp_rename statements in it, as opposed to dropping one object and creating another (which, for tables, would result in irrevocable data loss). So the tool does the right thing, relying ultimately on the SQL Server sp_rename stored procedure to properly change the object’s name internally within the database.
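The incremental change script that this build generates for the renames boils down to sp_rename calls along these lines (a sketch; the script SSDT actually emits wraps them in additional transaction handling and error checking):

```sql
-- Rename the table itself
EXECUTE sp_rename N'[dbo].[CustomerRanking]', N'Ranking', N'OBJECT';

-- Rename the primary key column and the corresponding foreign key column
EXECUTE sp_rename N'[dbo].[Ranking].[CustomerRankingId]', N'RankingId', N'COLUMN';
EXECUTE sp_rename N'[dbo].[Customer].[CustomerRankingId]', N'RankingId', N'COLUMN';

-- Rename the foreign key constraint to match the convention
EXECUTE sp_rename N'[dbo].[FK_Customer_CustomerRanking]', N'FK_Customer_Ranking', N'OBJECT';
```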
It’s time to create the stored procedure that ranks the customers. First, create a Stored Procedures folder beneath the dbo folder in Solution Explorer (to do this, right-click the dbo folder, and choose Add | New Folder). This folder would have already been created when you imported the database into the project, had there been any stored procedures in the database at the time. Then right-click the new Stored Procedures folder and choose Add | Stored Procedure. Name the stored procedure uspRankCustomers and click Add. SSDT creates a new file named uspRankCustomers.sql and opens it in a new T-SQL editor window. Replace the template code with the script shown in Listing 1-4 and save it, but keep the window open. Now press F5 to perform another build and push the new stored procedure out to the test database on LocalDB.
LISTING 1-4. The stored procedure to rank customers based on their total order amount.
CREATE PROCEDURE uspRankCustomers AS
 DECLARE @CustomerId int
 DECLARE @OrderTotal money
 DECLARE @RankingId int

 DECLARE curCustomer CURSOR FOR
  SELECT CustomerId, OrderTotal FROM vwCustomerOrderSummary

 OPEN curCustomer
 FETCH NEXT FROM curCustomer INTO @CustomerId, @OrderTotal
 WHILE @@FETCH_STATUS = 0
 BEGIN
  IF @OrderTotal = 0 SET @RankingId = 1
  ELSE IF @OrderTotal < 100 SET @RankingId = 2
  ELSE IF @OrderTotal < 1000 SET @RankingId = 3
  ELSE IF @OrderTotal < 10000 SET @RankingId = 4
  ELSE SET @RankingId = 5

  UPDATE Customer SET RankingId = @RankingId WHERE CustomerId = @CustomerId

  FETCH NEXT FROM curCustomer INTO @CustomerId, @OrderTotal
 END
 CLOSE curCustomer
 DEALLOCATE curCustomer
This stored procedure “ranks” the customers, examining them individually and assigning each a value based on their order total. It does this by opening a cursor against the order summary view, which returns one row per customer with their individual orders aggregated into a single order total. Based on the dollar value of the total, it then updates the customer with a ranking value between one and five. Then it advances to the next customer until it reaches the end of the cursor. As mentioned at the outset, this solution may be a bit contrived (and we’re sure you can think of a better approach), but it suits our demonstration purposes here just fine.
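For comparison (and not the approach used in this chapter’s exercise), the same ranking logic can be expressed without a cursor as a single set-based UPDATE joined to the summary view:

```sql
-- Set-based alternative to the cursor: rank all customers in one statement
UPDATE c
 SET c.RankingId =
  CASE
   WHEN s.OrderTotal = 0     THEN 1
   WHEN s.OrderTotal < 100   THEN 2
   WHEN s.OrderTotal < 1000  THEN 3
   WHEN s.OrderTotal < 10000 THEN 4
   ELSE 5
  END
 FROM Customer AS c
 INNER JOIN vwCustomerOrderSummary AS s ON c.CustomerId = s.CustomerId
```

The cursor version suits the upcoming demonstration because it gives the debugger individual statements to step through, one customer at a time.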
Testing and Debugging
Are you in the habit of running new or untested code on live databases? We certainly hope not. Though you could, you should not simply push all of the changes you’ve made in the project (steps 2 through 6) back to the live database on SQL2012DEV, and then run the stored procedure there for the very first time. It’s much safer to test the stored procedure offline first with LocalDB. You will now learn how to do that using the integrated debugger in Visual Studio. Then you can confidently deploy everything back to SQL2012DEV, and finally (step 7), to the cloud!
The uspRankCustomers stored procedure is still open in the code editor. Click inside the left margin on the OPEN curCustomer line to set a breakpoint just before the cursor is opened. The breakpoint appears as a red bullet in the margin where you clicked. This is exactly how breakpoints are set in C# or VB .NET code, and SSDT now delivers a similar debugging experience for T-SQL code as well. In SQL Server Object Explorer, expand the Stored Procedures node (located beneath Programmability, just as in SSMS) for SampleDb beneath the LocalDB instance. Right-click the Stored Procedures node, choose Refresh, and you will see the uspRankCustomers stored procedure you just deployed. Right-click on the stored procedure and choose Debug Procedure. SSDT generates an EXEC statement to invoke uspRankCustomers and opens it in a new query window. The debugger is already started, and is paused on the USE [SampleDb] statement above the EXEC statement.
Press F5 to continue execution. The debugger reaches the EXEC statement, enters the stored procedure, and then breaks on the OPEN curCustomer statement where you previously set the breakpoint. Now start single stepping through the stored procedure’s execution with the debugger’s F10 keystroke. Press F10 three times to step over the next three statements. This opens the cursor, fetches the first customer from it, and you are now paused on the first IF statement that tests the first customer’s order total for zero dollars.
Earlier, you copied six customer rows from the SQL2012DEV database to LocalDB, but we specifically instructed you not to copy any order data. So the loop will process each customer and (based on an order total of zero dollars) assign every one of them a ranking value of 1. Rather than interrupting your debugging session now to import some sample order data and start over, you will use the debugger’s Locals window to simulate non-zero order totals for the first two customers. Click the Debug menu, and then choose Windows | Locals.
In the Locals window, you can see that @CustomerId is 1 (this is the first customer) and @OrderTotal is 0 (expected, because there’s no sample order data). @RankingId is not yet set, but if you allow execution to continue as-is, the customer will be ranked with a 1. Double-click the 0.0000 value for @OrderTotal in the Locals window, type 5000 and press Enter. Now the stored procedure thinks that the customer actually has $5,000 in total orders. Press F10 to single step. Because @OrderTotal no longer equals zero, execution advances to the next IF condition that tests the order total for being under $100. Press F10 again and execution advances to the next IF condition that tests for under $1,000. Press F10 once more to reach the IF condition testing for under $10,000. This condition yields true (there are $5,000 in total orders), so pressing F10 to single step once more advances to the SET statement that assigns a ranking value of 4. This is the correct value for orders in the range of $1,000 to $10,000. Figure 1-18 shows the debugging session paused at this point.
Continue pressing F10 to single step through the remaining SET, UPDATE, and FETCH NEXT statements, and then back up again to the first IF statement testing the second customer’s order total value for zero dollars. Use the Locals window to fake another amount; this time change @OrderTotal to 150. Single step a few more times to make sure that this results in the stored procedure assigning a ranking value of 3 this time, which is the correct value for orders in the range of $100 to $1,000. Now press F5 to let the stored procedure finish processing the rest of the customers with no more intervention on your part.
When the stored procedure completes execution, right-click the Customer table in SQL Server Object Explorer (be sure to pick the LocalDB instance and not SQL2012DEV) and choose View Data. The table data confirms that the first customer’s ranking was set to 4, the second customer’s ranking was set to 3, and all the other customer rankings were set to 1 (if you already have a Customer table window open from before, the previous values will still be displayed; you need to click the Refresh button in the toolbar to update the display).
Figure 1-18 T-SQL debugging session of a stored procedure in Visual Studio.
This was by no means exhaustive testing, but it will suffice for demonstration purposes. The key point is that SSDT provides an environment you can use for debugging and testing as you develop your database offline, until you’re ready to deploy to a live environment (as you are now).
You are ready to deploy the database back to the live server on SQL2012DEV. As you may have correctly surmised by now, the process is fundamentally the same as working offline with LocalDB each time F5 is pressed: SSDT runs a schema compare to generate a change script. The project properties (by default) specify a connection string that points to LocalDB. So building with F5 uses the test database as the target for the schema compare with the project as the source, and then executes the generated change script against the test database on LocalDB. This all happens as a completely unattended set of operations every time you press F5.
Now you will carry out those very same steps once again, only this time you’ll get more involved in the process. In particular, you will specify the live SQL2012DEV instance as the target for the schema compare, rather than LocalDB. You will also review the results of the schema compare, and have the chance to choose to deploy or not deploy specific detected changes. Finally, you’ll get the opportunity to view, edit, save, and execute the change script after it is generated, rather than having it execute automatically. So there’s a bit more intervention involved in the process now, but you want it that way. The schema compare process itself is the same as the F5 build—you just get to exercise more control over it to support different deployment scenarios.
Right-click the SampleDb project in Solution Explorer and choose Schema Compare to open a new schema compare window. You need to specify a source and target for any schema compare, naturally. Because you launched the window from the SQL Server Database Project context menu in Solution Explorer, Visual Studio sets the source to the project automatically, leaving you to set just the target. To set the target, click its drop-down list and choose Select Target to display the Select Target Schema dialog, shown in Figure 1-19.
Figure 1-19 SSDT lets you choose between projects, databases, and snapshots for schema compare operations.
Notice how you can choose between three schemas for the target—a project, a database, or a data-tier application file (snapshot). The same choices are also supported for the source, although the SQL Server Database Project was assumed as the source automatically in this case. Any combination of source and target source schemas is supported; SSDT simply creates source and target models from your choice of backings. Then, working off the models, it shows you the differences and generates a change script for you. This flexibility is a major benefit of model-based database development with SSDT.
The Select Target Schema dialog has correctly assumed that you want to use a database as the target. All you need to do is choose the live SampleDb database running on SQL2012DEV. Click New Connection to open a Connection Properties dialog, type your actual machine name for the server name (which is SQL2012DEV in the current example), choose SampleDb from the database name drop-down list, and click OK. (Visual Studio will remember this connection for the future, and make it available for recall in the database dropdown the next time you run a schema compare.) Click OK once more, and then click the Compare button in the toolbar to start the schema compare.
It takes just a few moments for the operation to complete. When it finishes, the schema compare displays all the changes you’ve made offline since creating the project (steps 2 through 6). The report lets you see each added, changed, and dropped object, and it can be organized by type (table, view, and so on), schema, or action (change, add, or delete). Selecting any object in the top pane presents its T-SQL declaration in the bottom pane, side-by-side (source on the left, target on the right), with synchronized scrollbars. The T-SQL is color-coded to highlight every one of the object’s differences. If desired, you can exclude specific objects from the change script (which hasn’t been generated yet) by clearing their individual checkboxes back up in the top pane.
Select the vwCustomerOrderSummary view in the top pane to see the source and target versions of the view in the bottom pane. As shown in Figure 1-20, the rich visual display rendered by the schema compare tool makes it easy to identify all the changes made to the view.
Figure 1-20 Viewing the schema differences between a SQL Server Database Project and a live database.
As with the table designer, you can choose to update the live database immediately by generating and running the change script without previewing it. Or you can choose to be more cautious, and just generate the change script. Then you can view, edit, and ultimately decide whether or not to execute it. Your confidence level should be very high by now, so just click the Update button in the toolbar (and then click Yes to confirm) to let it run. SSDT updates the target database and displays a completion message when it’s done. Click OK to dismiss the message. The differences from before the update are still displayed in the window, now dimmed in gray (you can click Compare again to confirm that there are no longer any differences between the project and the live database on SQL2012DEV). In SQL Server Object Explorer, drill down on SampleDb under SQL2012DEV (or refresh already drilled down nodes) to verify that it reflects all the work performed in the project for steps 2 through 6 on your task list.
You are almost ready to run the new uspRankCustomers stored procedure and update the live Customer table, but there’s one more thing to do before that. Although the deployment created the schema of the Ranking table, it didn’t copy its data. You need to import the reference data from the spreadsheet in Figure 1-7 again, this time into the live database on SQL2012DEV. You can certainly use the same copy/paste trick we showed earlier when you imported the spreadsheet into the test database on LocalDB, but we’ll take this opportunity now to show you how to script table data with SSDT.
Under the (localdb)\SampleDb node (the LocalDB instance) in SQL Server Object Explorer, right-click the Ranking table and choose View Data to open a window showing the five rows in the table. Next, click the Script button on the toolbar (the next to the last button). SSDT generates INSERT statements for the five rows of data in the Ranking table, and displays them in a new query window. You want to execute these INSERT statements in a query window connected to the live database on SQL2012DEV, so select all the INSERT statements and press Ctrl+C to copy them to the clipboard. Then under the SQL2012DEV node in SQL Server Object Explorer, right-click the SampleDb database and choose New Query. Press Ctrl+V to paste the INSERT statements into the new query window and then press Ctrl+Shift+E to execute them. The reference data has now been imported into the live database and you’re ready to update the customers.
In the same query window, type EXEC uspRankCustomers, select the text of the statement, and press Ctrl+Shift+E. The stored procedure executes and updates the customers. (You can ignore the null value warning; it refers to the SUM aggregate function in the view, which does not affect the result.) To see the final result, type SELECT * FROM vwCustomerOrderSummary, select it, and press Ctrl+Shift+E once again. As shown in Figure 1-21, each customer’s ranking is correctly assigned based on their total order amount.
Figure 1-21 Viewing the final results of offline development in the live database.
Publishing to SQL Azure
The marketing team’s last request was that you deploy a copy of the database to SQL Azure. To ensure that the database is cloud-ready, you just need to tell SSDT that you are targeting the SQL Azure platform by changing a property of the project. Then, if any SQL Azure compatibility issues are identified, they can be resolved before you deploy. As you might expect by now, you will use the very same techniques you’ve learned throughout this chapter to deploy the SQL Server Database Project to SQL Azure.
Right-click the SampleDb project in Solution Explorer and choose Properties. In the Project Settings tab, you’ll notice that the Target Platform is currently set to SQL Server 2012. Change it to SQL Azure as shown in Figure 1-22, press Ctrl+S to save the properties, and then close the properties window.
Figure 1-22 . Changing the target platform of a SQL Server Database Project to SQL Azure.
Now press F5 to once again build the project and deploy it to LocalDB. The build fails, and the Error List pane shows the following error:
SQL71560: Table [dbo].[OrderHeader] does not have a clustered index. Clustered indexes are required for inserting data in this version of SQL Server.
This error informs you that the OrderHeader table is missing a clustered index. The astute reader might have noticed back in Listing 1-1 that the OrderHeaderId column in this table does not specify PRIMARY KEY (like the Customer table does on its CustomerId column), and so OrderHeader has no clustered index. This was an oversight that might not have been caught so easily because tables in on-premise editions of SQL Server do not require a clustered index. But SQL Azure databases absolutely require a clustered index on every table, so now that you’re targeting the cloud specifically, the problem is brought to your attention inside the project.
This is a quick and easy fix to make using the table designer. Back in the SQL Server Database Project (in Solution Explorer), double-click the OrderHeader.sql table (under the dbo and Tables folders) to open the project’s definition of the table in the designer. Right-click the OrderHeaderId column, choose Set Primary Key, save, and then close the table designer. The primary key definition results in the creation of a clustered index on the table. This resolves the issue, and you’ll see the error disappear from the Error List pane immediately.
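In the project's source file, the fix amounts to adding the PRIMARY KEY designation to the OrderHeaderId column. The following sketch abbreviates the column list (the actual columns appear in Example 1-1) and shows only the change that matters:

```sql
-- Sketch of the corrected OrderHeader definition (column list abbreviated;
-- see Example 1-1 for the actual columns). The PRIMARY KEY designation
-- implicitly creates the clustered index that SQL Azure requires.
CREATE TABLE [dbo].[OrderHeader]
(
    [OrderHeaderId] bigint NOT NULL IDENTITY PRIMARY KEY,  -- now the clustered primary key
    [CustomerId]    bigint NOT NULL
    -- ...remaining columns as defined in Example 1-1...
);
```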
Now that you know the database is cloud-compatible, you’re ready to deploy it to SQL Azure. Right-click the SQL Server Database Project in Solution Explorer and choose Publish to display the Publish Database dialog. Click Edit, enter the server and login information for your SQL Azure database, and click OK. Figure 1-23 shows the Publish Database dialog with the target connection string pointing to a SQL Azure database.
Figure 1-23 The Publish Database dialog set to deploy the project to a SQL Azure target instance.
As we’ve been noting all along, you can script the deployment without executing it by clicking Generate Script. But you’re ready to deploy to SQL Azure right now. Click Publish, and Visual Studio spins up the same familiar process. It performs a schema compare between the source SQL Server Database Project and target SQL Azure instance, and then generates and executes the resulting change script on the target. As with your very first build to LocalDB, the database does not exist yet on the target, so the change script creates the whole database in the cloud from scratch. Subsequent deployments will generate incremental change scripts that specify just the actions needed to synchronize the SQL Azure database with the project.
During the deployment process, the Data Tools Operations window in Visual Studio provides a dynamic display of what’s happening. Figure 1-24 shows the Data Tools Operations window after the publish process is complete.
Figure 1-24 The Data Tools Operations pane reports all the actions taken to deploy to SQL Azure.
A really nice feature of the Data Tools Operations pane is the ability to see the scripts that were just executed inside query windows and view their execution results. Click the various links (View Preview, View Script, and View Results) to review the deployment you just ran.
After deploying, SSDT automatically adds your SQL Azure server instance to the SQL Server Object Explorer, as shown in Figure 1-25. You can drill down on SQL Azure databases in SQL Server Object Explorer and work with them using the very same development tools and techniques that we’ve shown throughout this chapter. It’s exactly the same model-based, buffered experience you have with connected development of on-premise databases, only now it’s a SQL Azure database backing the model. Thus, SQL Server Object Explorer functions as a single access point for connected development against any SQL Server database, wherever it’s located.
You’ve used SSDT to successfully implement all the tasks to fulfill your requirements. Before concluding your work, take another snapshot. Right-click the project in Solution Explorer one last time and choose Snapshot Project. SSDT serializes the database model (based on the project’s current state) into another .dacpac file in the Snapshots folder, which you should rename to Version1Complete.dacpac.
Now your project has two snapshots, Version1Baseline.dacpac and Version1Complete.dacpac, each representing the database structure at a different point in time. The collection will grow as you take new snapshots during future development, so your project accumulates a historical record of its database structure as it changes with each new version. And because any snapshot can serve as either the source or target model of a schema compare operation, it's easy to compute the differences between any two points in time, or between any single point in time and either a live database (on-premise or SQL Azure) or an offline SQL Server Database Project.
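For example, because both snapshots are ordinary .dacpac files, the difference between them can also be scripted from the command line with SqlPackage.exe. This is only a sketch; the Script action and its parameters are assumed to be available in your SSDT installation, and the file names assume the Snapshots folder:

```
SqlPackage.exe /Action:Script ^
    /SourceFile:"Snapshots\Version1Complete.dacpac" ^
    /TargetFile:"Snapshots\Version1Baseline.dacpac" ^
    /TargetDatabaseName:SampleDb ^
    /OutputPath:Version1Changes.sql
```

The generated Version1Changes.sql contains the T-SQL that would bring a database at the baseline state up to the completed state.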
Figure 1-25 A SQL Azure database connected in SQL Server Object Explorer.
No tool is perfect, and SSDT is no exception. Yet even as we call out those areas where the tool is lacking, we remain big believers in this new technology, and we strongly encourage SSDT adoption over traditional database development methods. The SSDT team has done a fantastic job with the model-based design, and a great deal more tooling could still be built by leveraging the model's rich metadata, such as database diagrams and query designers, which are not yet available. There is also no spatial viewer to graphically display spatial query results, such as the one provided in SQL Server Management Studio (we cover spatial queries and the spatial viewer in Chapter 9).
Although SSDT is intimately aware of database schema, it provides no data-oriented functionality. It can't generate or compare data in the database, nor does it support database unit testing. These are important features of the Visual Studio Database Professional edition (DbPro) that are still missing from SSDT. So although SSDT is positioned to replace DbPro, that won't happen until it achieves parity with key components of the DbPro feature set.