# Ed Elliott's blog

## SSDT and Friends - .net meetup video

• Posted on: 10 February 2017
• By: Ed Elliott

I did a talk at the London .net meetup; if you want an overview of what SSDT is and how to get started then I would recommend it:

https://skillsmatter.com/skillscasts/9274-londondot-net-january-meetup

This was aimed at .net developers rather than DBAs, so there isn't much talk about "why you should use source control" etc. as everyone in the room used source control already :)


## Devops without management buy in?

• Posted on: 10 February 2017
• By: Ed Elliott

I was talking to someone at a meetup recently who was really keen on doing continuous deployment for their database, but they had a number of issues. The main one was that, because management wasn't sold on the idea, the DBAs had complete control to push back on each and every idea he had - there was no way he could deploy continuously.

The accepted route to devops is management buy-in: if you do not have management buy-in then you can't do devops. I agree totally with this - I have seen how an executive can set the direction and all of a sudden hurdles that were there are removed or being removed - but what do you do when you don't have management buy-in?

In short, you do as much as you can. For SQL databases that means you:

- use source control
- use an IDE
- write unit tests
- run your tests on check-in / merge

You do as much as you can. In an ideal world you would check into a shared source control repository, and your build server would check out the code, build it, deploy it and test it before deploying to the prod server - but if you have zero resources you can still use git to create a local repo, install jenkins locally and point it at your local git repo, and write tSQLt tests, all with the free version of SSDT. When you have done that, there is nothing stopping you having a continuous delivery pipeline locally.
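To make the "write unit tests" step concrete, a minimal tSQLt test looks something like this (all object names here are illustrative, not from a real project):

```sql
-- Create a test class (a schema marked as containing tests)...
EXEC tSQLt.NewTestClass 'PersonTests';
GO

-- ...then a test: tSQLt runs any procedure in a test class
-- whose name starts with 'test'.
CREATE PROCEDURE PersonTests.[test person table returns inserted row]
AS
BEGIN
    -- FakeTable swaps the real table for an empty copy with no constraints
    EXEC tSQLt.FakeTable 'dbo.person';
    INSERT INTO dbo.person (person_id, first_name, last_name)
    VALUES (1, 'Ada', 'Lovelace');

    SELECT first_name INTO #actual FROM dbo.person WHERE person_id = 1;
    SELECT 'Ada' AS first_name INTO #expected;

    EXEC tSQLt.AssertEqualsTable '#expected', '#actual';
END
GO

-- Run all tests (this is what the build server would execute on check-in)
EXEC tSQLt.RunAll;
```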

Is it ideal? No - does it work in practice? Yes.

If you are just one developer then you can do it all locally; if you are part of a team then find a server to host git and jenkins - the phrase "do it first and ask for forgiveness later" springs to mind.

(warning: you know your environment, don't blame me for your actions) :)


## SQLCover fixes

• Posted on: 30 September 2016
• By: Ed Elliott

There have been a couple of fixes in SQLCover this week, kindly submitted by John Mclusky (https://github.com/jmclusky):

Code coverage not reported correctly for CTEs at the end of a stored procedure if the 'with' is immediately preceded with a semicolon

and

DeclareTableVariableStatement statements cannot be covered, so report falsely as missing coverage

I have also changed where the releases can be downloaded from to use the github releases:

https://github.com/GoEddie/SQLCover/releases

The previous version is still available but I would recommend the latest version.

Finally, I have started a separate page to bring all the links together, and will add an FAQ at some point:

https://the.agilesql.club/Projects/SQLCover

Happy testing!

## Refactoring in SQL Server Data Tools - SSDT

• Posted on: 27 September 2016
• By: Ed Elliott

In this post I will talk about the built-in refactoring support in SSDT. The language is slightly different from my normal style as it was originally going to be published elsewhere, but rest assured it was written by me.

### What is refactoring?

In programming, the term 'refactoring' essentially means taking some code and improving it without adding features and without breaking the code. When we refactor code we ideally want to make small improvements over time, using an IDE that automates as many of the tasks as possible for us.

While SQL Server Data Tools (SSDT) cannot guarantee that we do not break any code, it helps us make small improvements and, as an IDE, it offers us some refactoring abilities that do not exist in either SQL Server Management Studio (SSMS) or Notepad.

Just so we don’t get distracted by some of the third-party add-ins that give more comprehensive support for refactoring in SSDT or SSMS, this article will talk only about what is out-of-the-box with SSDT.

### What refactoring support does SSDT have?

SSDT helps us to refactor code by automating the actions of:

• Expanding wildcards
• Fully qualifying object names
• Moving objects to a different schema
• Renaming objects

Aside from this list SSDT also, of course, helps us to refactor code manually with its general editing facilities.

### Expand Wildcards

SSDT allows you to highlight a "*" in a SELECT statement and have it replaced with a comma-delimited list of the column names in the table. As an example, take these table definitions:

CREATE SCHEMA hr
GO

CREATE TABLE dbo.person
( person_id INT NOT NULL IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
first_nmae VARCHAR(25) NOT NULL, --typo is on purpose!
last_name VARCHAR(25) NOT NULL
)

CREATE TABLE hr.departments
( id INT NOT NULL IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
funny_name_ha_ha_I_am_so_cool_my_last_day_is_tomorrow VARCHAR(25) NOT NULL
)

CREATE TABLE hr.department
( person_id INT,
department_id INT
)

And the following stored procedures:

CREATE PROCEDURE hr.get_employee ( @employee_id INT )
AS
SELECT *
FROM person p
JOIN hr.department dep ON p.person_id = dep.person_id
JOIN hr.departments deps ON dep.department_id = deps.id;
GO

CREATE PROCEDURE prod_test
AS
EXEC hr.get_employee 1480;
GO

To improve this code, the first thing we will do is change the "SELECT *" in the hr.get_employee procedure to specify just those columns that we need. If we open the stored procedure in SSDT and right-click the "*" we can choose 'refactor' and then 'expand wildcards':

We are then given a preview of what will change, and we can either cancel or accept this:

Now that we have the actual columns, we can remove the ones we do not need, save and check-in our changes. The procedure should look like:

CREATE PROCEDURE hr.get_employee ( @employee_id INT )
AS
SELECT [p].[first_nmae],
[p].[last_name],
[deps].[funny_name_ha_ha_I_am_so_cool_my_last_day_is_tomorrow]
FROM person p
JOIN hr.department dep ON p.person_id = dep.person_id
JOIN hr.departments deps ON dep.department_id = deps.id;
GO

### Fully qualify object names

We need to make sure that our SQL code uses fully qualified object names. This is partly for clarity and partly because unqualified names have to be resolved against a default schema at execution time, which can mean the wrong object is found or extra name-resolution work is done. In this example, the get_employee stored procedure references the person table without a schema, which means that the user must have dbo as their default schema. To fix this we right-click anywhere in the stored procedure and choose Refactor and then 'Fully-qualify names'; we could also use the default shortcut of ctrl+r then q. Again we get a preview window:

If we accept the preview, then we end up with the following code:

CREATE PROCEDURE hr.get_employee ( @employee_id INT )
AS
SELECT [p].[first_nmae],
[p].[last_name],
[deps].[funny_name_ha_ha_I_am_so_cool_my_last_day_is_tomorrow]
FROM [dbo].person p
JOIN hr.department dep ON p.person_id = dep.person_id
JOIN hr.departments deps ON dep.department_id = deps.id;
GO

This doesn’t seem like a massive deal, as we could just have written ‘dbo.’, but if we had more than one reference to update, or a number of different tables, it would have saved us more work.
The ‘fully qualify object names’ refactoring goes further than just table names: it will also qualify tables in join statements and columns where needed.

For example if I had the following query:

SELECT first_nmae
FROM [dbo].[person] p
JOIN hr.department d ON department_id = p.person_id

When using the refactoring we are offered the chance to fully qualify first_nmae and the department_id in the join:

If we decided we did not want to apply the refactoring to one or the other we could uncheck them in the dialog and only apply the ones that we actually required.

If we apply both of the ‘refactorings’, we end up with:

SELECT [p].first_nmae
FROM [dbo].[person] p
JOIN hr.department d ON [d].department_id = p.person_id

### Move objects to a different schema

We can refactor the schema that an object belongs to. This is a three-stage process that:

• 1. Changes the schema that the object belongs to
• 2. Changes all references to the original object to specify the new schema
• 3. Adds an entry to the project's .refactorlog file to help the deployment process

If we want to move the person table from the dbo schema into the hr schema, we can simply right-click the table and then choose ‘refactor’ and then ‘move schema’:

If we look at the preview we can see that, as well as changing the schema on the object itself, it is also going to change the references to the table everywhere else in the database:

This refactoring also adds a new file to our SSDT project, the .refactorlog file, in order to assist in a correct deployment of the change that preserves the data in the table. Inside the refactorlog is some xml:

<?xml version="1.0" encoding="utf-8"?>
<Operations Version="1.0" xmlns="http://schemas.microsoft.com/sqlserver/dac/Serialization/2012/02">
<Operation Name="Move Schema" Key="5df06a6f-936a-4111-a488-efa0c7f66576" ChangeDateTime="11/10/2015 07:10:39">
<Property Name="ElementName" Value="[dbo].[person]" />
<Property Name="ElementType" Value="SqlTable" />
<Property Name="NewSchema" Value="hr" />
<Property Name="IsNewSchemaExternal" Value="False" />
</Operation>
</Operations>

What this does is save the fact that an object has changed from one schema to another. SSDT reads the refactorlog when generating a deployment script. Without the refactorlog, SSDT would compare the source dacpac and the target database, drop the table ‘dbo.person’ - deleting all its data - and create a new empty ‘hr.person’, as they are different objects. Because of the refactorlog, SSDT generates a schema transfer rather than a drop / create:
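As a sketch, the transfer in the generated script looks something like this (the real SSDT output includes extra error handling and refactorlog bookkeeping):

```sql
-- A schema transfer keeps the table and its data intact;
-- a drop / create would lose the contents of dbo.person.
ALTER SCHEMA hr TRANSFER [dbo].[person];
```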

SSDT stops the change happening again by recording, in the target database, that the refactor key has been run so you could create a new table called hr.people and it would not get transferred as well:
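The deployed keys are recorded in a table that SSDT creates in the target database, so you can inspect which refactor operations have already been applied:

```sql
-- dbo.__RefactorLog is created by SSDT in the target database;
-- each row is the key of a refactor operation that has been deployed.
SELECT OperationKey FROM dbo.[__RefactorLog];
```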

### Renaming Objects

The final type of built-in refactoring is renaming objects. This works similarly to the move schema refactoring, but it allows us to rename any object such as a procedure, table, view or column. SSDT renames all the references for us:

SSDT also adds an entry into the refactorlog:

And finally it generates an "sp_rename" as part of the deployment rather than a drop/create:
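As a sketch, the deployment script contains something along these lines (here using the misspelled column from the earlier example):

```sql
-- Rename the column in place instead of dropping and recreating the table.
EXECUTE sp_rename @objname = N'[dbo].[person].[first_nmae]',
                  @newname = N'first_name',
                  @objtype = N'COLUMN';
```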

Renaming objects becomes pretty simple and safe, so you can go through and correct small mistakes like spelling or naming-consistency errors. Without SSDT or another IDE to do it for you, it is really difficult to rename objects as part of a quick refactoring session.

### Other ways SSDT helps to refactor

Aside from the built-in refactorings, SSDT helps us to refactor because it allows us to find where we have references to an object. For example, if you wanted to add a column to a table but did not know whether there were stored procedures that did a "select *" from the table and then did something that would be broken by adding a new column, you could right-click on the table name and do "Find All References":

We can also do the same thing for column names and so we can really easily get a picture of how and where objects are used before we change them.

### General advice on refactoring SQL Server databases

Refactoring SQL Server databases is really helped by using SSDT, but there are two things that you can do which really give you the freedom to refactor your code as you go along:

• Use stored procedures / views rather than access tables directly
• Have a suite of unit/integration tests

### Use stored procedures / views rather than access tables directly

If your application accesses your data using the base tables, it means you cannot change anything in your tables without also changing the application. It also makes it really hard to find references to your objects so you know where they are used. Instead you should use stored procedures as an API to expose your data. If you do this, then instead of having to manually find references you simply need to find the places where the stored procedure is called.

### Have a suite of unit/integration tests

If you have both unit and integration tests then, as well as having a tool that helps you to refactor your code, you also get the confidence of knowing that you can make changes without breaking anything else. Without these test suites it is hard to know whether you have broken that year-end critical task that is so easily forgotten about.

### Conclusion

SQL Server Data Tools has the basics of refactoring tools, but it isn’t really what one comes to expect from a SQL IDE. What about a tool to automatically insert a semi-colon after every statement, if there is none? Why is there nothing that changes the case of keywords according to SQL conventions? One looks in vain for a way of reformatting code correctly. One could be more ambitious and ask for ways of finding variables that aren’t used, splitting tables, encapsulating code within a stored procedure, or checking for invalid objects. Fortunately, there are a number of add-ins that fill the gap, and in the next article we take a look at SQL Prompt, which is the leading third-party refactoring tool.

## Database Deployments in Uncontrolled Environments

• Posted on: 20 September 2016
• By: Ed Elliott

The ideal is to make a change and see that change deployed to production. In a perfect world we would be told to work on something, write the code + tests, deploy to a test environment, prove it works and deploy - the time this takes is the cycle time, and the faster you can get it the easier many things become.

The cycle time is easy to measure - it is the time from when the ticket arrives in the backlog to when it moves to the done column; if your issue tracking system can't tell you this easily then use something else! Tickets are moved into the "Done" state when they have been deployed into production. If you do nothing but investigate and try to reduce your cycle time, you will make a massive difference to your development process.

There have been a few discussions on Stack Overflow recently about how to manage deployments in uncontrolled environments, specifically data migrations. The questions were asked from an SSDT perspective; I don't think that SSDT is a great choice for these uncontrolled environments, and they bring some additional requirements that need extra thought and care when creating release scripts (whether manually or using a tool).

### What is an uncontrolled environment?

I define it as a database that is managed by a customer: typically a vendor sells a product that includes a database, the database is on a customer server and the customer is sysadmin and can make changes. There is a difference between databases where customers are allowed to make changes and ones where they are not - but in either case you still need to take extra care, even if it is only to add additional logging and warnings to the output, so any upgrade scripts help your support team diagnose issues rather than displaying an error like "DBUpdate Error" - yes, I have seen that with a vendor product once!

When you own and manage the deployment for your database application, you can take the following for granted:

| You can | Because |
| --- | --- |
| Drop objects not in source code | If it isn't in your source it does not exist |
| Rely on scripts generated / created by your dev team | If someone wants to create an object called X they can see if an object called X already exists or not |
| Ensure each deployment happens successfully | You run each deployment, run tests and verify the results |
| Write data migration scripts using accurate data | You have the data |
| Use truncate on tables to clear data | The script author knows there are no foreign keys pointing to the table and that the data can be restored from a backup rather than a transaction |

If you do not control the environment then you cannot do these things, because:

| You cannot | Because |
| --- | --- |
| Drop objects not in source code | Who knows what the user has changed |
| Rely on scripts generated / created by your dev team | Users may have made incompatible changes - you want to create a new view called "users_blah"? Well, it turns out they have an audit stored procedure called users_blah |
| Ensure each deployment happens successfully | You cannot run each deployment yourself, run tests and verify the results |
| Write data migration scripts using accurate data | You do not have the data |
| Use truncate on tables to clear data | The script author does not know whether there are any foreign keys pointing to the table or whether the data can be restored from a backup |

### So what can we do?

I really don't think that there is a one-size-fits-all solution here, so you will need to look at your database and what changes you need to make, but some randomish thoughts are:

• Compare / Merge type deployments will drop any changes the customer has made - that is bad
• If you had each version of the source, you could verify whether there have been any changes before deploying - that sounds good but potentially a support nightmare
• The migrations approach sounds better but you need to ensure that every change is actually deployed
• Adding additional logging and verification code is a must - instead of just "truncate table", do "print truncating, check for things that will block this, truncate" - and make sure that an irreversible command like truncate has forced the user to take a backup, or at least that the users understand they need a backup (even in 2016 this isn't guaranteed!)
• Taking simple precautions like avoiding "select *" in favour of column lists, and using "order by column" rather than "order by ordinal", will help you in the long run with odd issues that would otherwise be hard to diagnose!
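The logging-and-verification point in the list above can be sketched in T-SQL like this (the table name and messages are illustrative):

```sql
-- Guarded truncate: refuse to run if anything would make it unsafe,
-- and tell the user what to do instead of failing mysteriously.
IF EXISTS (SELECT 1 FROM sys.foreign_keys
           WHERE referenced_object_id = OBJECT_ID('dbo.staging_data'))
BEGIN
    RAISERROR('dbo.staging_data is referenced by foreign keys - take a backup and review the references before re-running this script.', 16, 1);
    RETURN;
END

PRINT 'Truncating dbo.staging_data';
TRUNCATE TABLE dbo.staging_data;
```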

I guess the end answer is to offer a hosted solution and move to continuous deployment in a controlled environment as that actually makes a lot of these things simpler!

## Controlling a dacpac deployment

• Posted on: 21 August 2016
• By: Ed Elliott

I have been thinking quite a lot recently (ok, not that much, but my thinking has changed) about how to deploy dacpac's - in particular how to control the deployment: what tool is used to deploy the dacpac, what connection details are passed in, and which of the hundreds of parameters are passed in and how.

Thinking back over how I have configured and deployed databases using dacpac's in a number of different environments, my approaches have been:

### In the beginning there was sqlpackage.exe

sqlpackage is great: it gives you a well-documented command line and you can pass whatever arguments you need. The number of times I have typed sqlpackage /action:script or /action:publish - the a, c and t keys on my keyboard are a little bit faded (they aren't, but you get the picture!).

What I used to do was check into my unit-test SSDT projects a directory with sqlpackage and all the dependent dlls, so I could run the deploy from any build agent without having to have the DacFx or SSDT installed. This worked really well, but there were a couple of downsides. Firstly, the version checked in would hardly ever be updated, and with an almost monthly release cycle for SSDT builds this means you get behind pretty quickly. The other main issue is that you have to build up a lot of command line arguments, so invariably I would end up with a powershell script or piece of .net code to manage that complexity. This is made even more complex when you consider the pain that is spaces in windows paths and an automated deployment process. There is an alternative to passing arguments on the command line, which is to use a publish profile - I completely discounted these; I am not sure why, it was a long time ago. Let's just agree that this approach to using sqlpackage.exe left me feeling a little uneasy in a few places.

### After sqlpackage.exe there came code

The next approach was to write some code, either powershell or c#. Typically powershell would be part of a production ci/cd process: I would write a script to call the DacFx and then call the script from jenkins/vsts etc. I also found myself calling the DacFx from c#, but that was typically limited to integration tests for tools that I was writing.

I liked the scripted approach because it meant that I could still pass some arguments, like the server name and user credentials, on the command line (or encrypt them where necessary) and put a whole load of arguments in the script to be shared by each environment. There were still a couple of problems. Firstly, as with the sqlpackage.exe approach, the DacFx needed to be installed and available, so I would check the files into source control (or make them available etc). There was one additional problem that I did not foresee: when you use sqlpackage.exe you can load contributors from a sub-folder called "Extensions", but when you use the DacFx yourself you have to install them into program files, which went against my (pretty strong) desire to be able to deploy from any build agent (windows for now, but I'm hoping!).

### Then came the flip-flopping

For a while I meandered between the two approaches, until the SSDT team announced that they had released a nuget package with the DacFx in it and I decided to move over to that, as it meant I no longer had to check the dlls into source control - which in itself is a big win. I also decided to fix the extensions problem, and figured out a (slightly hacky) way to get the DacFx dlls in the nuget package to behave like sqlpackage and allow a sub-directory to be used to load dlls - I fixed that with this powershell module that wraps a .net dll (https://the.agilesql.club/blogs/Ed-Elliott/DacFxed-Nugetized-DacFx-Power...). Now I had the problems of checked-in dlls and of loading contributors without installing into program files sorted, BUT I still had the problem of lots of command line args, which I was sharing in powershell scripts and passing in some custom bits like server/db names etc.

### The final piece of the puzzle

I had actually been using publish profiles all along - I normally had at least one as part of my unit test project, which I would use to deploy my whole database locally and run the tests before I did a git push (obviously during dev I would do a quick deploy rather than constant full deploys). My current grand scheme is to put everything I need in publish profiles and use those to deploy dacpac's. I realise that this isn't a great revelation, but I am happy with them and pleased with where I am now - who knows where I will end up!
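For reference, a publish profile is just an msbuild-style XML file; a minimal one looks something like this (the connection string and database name are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>MyDatabase</TargetDatabaseName>
    <TargetConnectionString>Data Source=.;Integrated Security=True</TargetConnectionString>
    <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
  </PropertyGroup>
</Project>
```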


## tSQLt Visual Studio Test Adapter

• Posted on: 17 August 2016
• By: Ed Elliott


## What is this?

This lets you use Visual Studio to run tSQLt tests easily. Visual Studio has a built-in framework for finding and executing tests, so if you have tSQLt tests in an SSDT project, for example, you can easily see and execute your tests via the Visual Studio Test Explorer window (the adapter only requires the .sql files in source control, it does not require SSDT). It is also available for use on your favorite build server via the vstest.console.exe tool, or, if you have vsts, via the default test task.

## What does it look like?

Two screen shots, the first is the tSQLt sample tests in Visual Studio:

A couple of things to mention: firstly, if you double-click on a test in the Test Explorer window it will jump you to the test in the code (whoop). Secondly, if you run a test and it fails you get a failure message (it gives you a red cross), and clicking the test shows the tSQLt error message below.

This screen shot is the test running on vsts:

Oooh, that looks shiny! You can see the step failed because the tests failed (I hadn't actually deployed them, whoopsie). You get some nice graphs, and to get all this you just need to add the adapter to the project and configure the "Test Assemblies" task.

## How do I use it?

To run tests you will need a .runsettings file, selected via "Test --> Test Settings --> Select Test Settings File". In the runsettings file you will need, at a minimum, the connection string to connect to:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
<TestRunParameters>
<Parameter name="TestDatabaseConnectionString" value="server=.;initial catalog=tSQLt_Example;integrated security=sspi" />
<Parameter name="IncludePath" value="AcceleratorTests" />
</TestRunParameters>
</RunSettings>

If you run your tests and get an error with the connection string not being set make sure you actually have the runsettings file attached.

If you have a large project you will really want to limit the processing to just test files, so you can add a filter which tells the adapter to only parse files under paths that match a specific regex. To discover tests we need both the test procedure and the schema that defines the test class, so if you use a filter, ensure that both are included.

Once you have your run settings, install the Visual Studio extension:

https://visualstudiogallery.msdn.microsoft.com/cba70255-ed97-449c-8c82-e...

If you want to run this on VSTS then you can either download the VSIX, extract all the files and check them into a folder in your solution, or use nuget to add the package AgileSQLClub.tSQLtTestAdapter (https://www.nuget.org/packages/AgileSQLClub.tSQLtTestAdapter/0.59.0) to your solution. Nuget doesn't support SSDT projects, so you will need at least one .net dll project, which can do nothing at all except reference this package. Once you have the test adapter in your project, configure a test task like:

The things to note here are that you can either add a filter to the runsettings file or filter which .sql files are passed to the test adapter; either way, you will need to make sure both the schemas and tests are passed in, otherwise we can't work out what is a test and what is an ordinary stored procedure.

## An Oddity

Because of the way the test adapter framework runs and discovers tests, and the way that we sql developers like to separate our schemas and tests into different files, I need to add a fake test with the name of the test class. If you try to run it you will get a message "Not Run" and it won't do anything, but all of the individual tests will work. I tried to make this generic so you don't need SSDT to run it - ironically, if I had relied on dacpac's it wouldn't have been a problem!

## What else?

This isn't the full expression of what I would like to do with it - there are a couple of things I will add in the future, but this is a start:

• Ability to display the result sets rather than just the outcomes
• In SSDT I would like to deploy any changed objects that are referenced by a test, so you make your changes, then run the test and the code is deployed and then run - cool hey!

## Minimum Requirements

VS 2015, Update 3 - if you use an earlier version you will get lots of "0 test cases found" messages. If you are desperate for an older version of vs, let me know and I will take a look, but I am not planning on supporting older versions unless there is a good reason.

## Open Source

Of course this is open source; it will be on:

Any issues shout!

ed

## DacFxed - Powershell Nugetized DacFx wrapper

• Posted on: 2 August 2016
• By: Ed Elliott

Deploying a dacpac from powershell should be pretty easy: there is a .net api which you can use to open a dacpac, compare it to a database and either create a script or make the database look the same as the dacpac. There are, however, a couple of problems with the approach.

## Problem 1 - Is the DacFx installed?

The first problem is whether the DacFx is installed at all; the chances are that if Visual Studio or SQL Server has been installed then it will be there.

## Problem 2 - Where is the DacFx installed?

Once you know that the DacFx is available, where is it? Depending on how you installed it - with Visual Studio or via an MSI - it will be in a different location. Further, a recent build will be in the 130 folder; older builds use the 120 or 110 folder. Which one do you have??

## Solution 1

What I used to do to combat these first two issues was to check the DAC folder, which includes sqlpackage, into my test projects and just shell out to that to do the deploy. This works, but updates to SSDT come every one to three months, and the projects I have done this on are all on old builds of the DacFx and probably will be until something goes wrong and someone updates it. That sucks :(

## Solution 2

It was with great excitement - and I don't say that lightly - that in July the SSDT team announced that they would be maintaining a nuget package of the DacFx. This is really exciting because it means that problems 1 and 2 no longer exist: we can simply reference the nuget package, and keeping up to date is pretty simple. While you recover from the excitement that is the DacFx in a nuget package, I have something else to get you really excited...

## This means no install to Program Files

I know, right - exciting! What this means is that even on hosted build servers (vsts for example) where we don't have admin rights, we can still keep up to date with recent copies of the DacFx without having to commit the source control sin of checking in libraries.

## Problem 3 :( Deployment Contributors

If we no longer need admin rights to use the DacFx, it means we can no longer rely on admin rights when deploying dacpacs - so deployment contributors cannot be copied to program files, which is where the DacFx loads contributors from. Remember also that contributors are loaded from the program files (or x86) version of SQL Server or Visual Studio or whatever version of SSDT you have, so it could be any one of the versioned DAC folders, i.e. 110, 120, 130 etc.

## Solution 3

There is a solution? Well yes, of course - otherwise I wouldn't be writing this! DacFxed is a powershell module which:

• 1. References the DacFx nuget package so updating to the latest version is simple
• 2. Implements a hack (ooh) to allow contributors to be loaded from anywhere
• 3. Is published to the powershell gallery so to use it you just do "Install-Module -Name DacFxed -Scope User -Force"
• 4. Has a Publish-Database, New-PublishProfile and Get-DatabaseChanges CmdLets

Cool, right? Now a couple of things to mention. Firstly, this is of course open source and available at https://github.com/GoEddie/DacFxed

Secondly, what is up with the cool name? Well, I didn't want to call the module DacFx as I was publishing it to the powershell gallery, and I hope that one day the SSDT team will want to create a supported powershell module that publishes dacpac's - I didn't want to steal the name. DacFxed is just DacFx with my name appended; what could be cooler than that?

Currently, to use a deployment contributor, you either need to copy it into the Program Files directory or use sqlpackage and put it in a sub-folder called Extensions - neither of these options is particularly exciting. I needed a better way to include a deployment contributor in some user-writable folder and then load the dlls from there. I hope (there is a connect item somewhere) that one day the SSDT team will give us an option on the DacFx to say where to load contributors from - when that happens, I commit to modifying this package to support their method, so if you do use this then fear not: I will make sure it stops using a hack as soon as possible.

### What is this so called hack?

When the DacFx tries to load deployment contributors it searches a number of well-known directories for the dlls. It also has a fairly unique way of determining which folder to use when being called from visual studio: it checks whether, two folders above the folder the dll is in, there is a file called "Microsoft.VisualStudio.Data.Tools.Package.dll" - if this file exists, then it searches the folder the dll is in for deployment contributors to load. The interesting thing is that it doesn't actually load the file, it just checks its existence - if it exists, it searches itself for extensions. So if we have this structure:

Folder 1\Microsoft.VisualStudio.Data.Tools.Package.dll (this can be an empty text file)
Folder 1\Folder 2\folder 3\DacFx Dll's

When you load the DacFx dlls from folder 3 (to be specific, "Microsoft.Data.Tools.Schema.Sql.dll") we get the ability to load contributors from user-writable folders (which is the end goal here).
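As a rough illustration, the probing rule described above can be modelled like this - note this is a Python sketch of the behaviour, not the actual DacFx code, and the function name is mine:

```python
import os

def dacfx_probes_for_contributors(dacfx_dll_path):
    """Sketch of the probing rule: if the marker file exists two
    folders above the folder containing the DacFx dll, the DacFx
    searches the dll's own folder for deployment contributors."""
    dll_dir = os.path.dirname(os.path.abspath(dacfx_dll_path))
    two_up = os.path.dirname(os.path.dirname(dll_dir))
    marker = os.path.join(two_up, "Microsoft.VisualStudio.Data.Tools.Package.dll")
    return os.path.isfile(marker)
```

The marker can be an empty text file - only its existence is checked, it is never loaded.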

### Problem solved?

Well, no. It would be if it wasn't for the way .net resolves assemblies for PowerShell cmdlets. If our PowerShell module is structured like this:

WindowsPowershell\Modules\DacFxed\Cmdlet.dll
WindowsPowershell\Modules\DacFxed\Microsoft.VisualStudio.Data.Tools.Package.dll (remember this is an empty text file)
WindowsPowershell\Modules\DacFxed\bin\dll\Microsoft.Data.Tools.Schema.Sql.dll
WindowsPowershell\Modules\DacFxed\bin\dll\OtherDacFxDll's

What would happen is that our Cmdlet.dll would try to resolve the DacFx and fail, because .net doesn't search every sub-folder of the current directory for dlls to load. If .net can't find the dll locally it searches horrible things like the GAC and, if it finds the dll there, loads that copy. This means our sneaky trick to get the DacFx to load our extensions doesn't work.
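The workaround is to resolve the assembly ourselves from the known sub-folder when default probing fails (in .net this kind of thing is done with an AppDomain.AssemblyResolve handler; the details below are my assumption for illustration, not taken from the DacFxed source). A Python sketch of the idea:

```python
import os

def resolve_from_subfolder(assembly_name, module_root,
                           subfolder=os.path.join("bin", "dll")):
    """When default probing fails, look for the dll in a known
    sub-folder of the module - .net will not do this on its own."""
    candidate = os.path.join(module_root, subfolder, assembly_name + ".dll")
    return candidate if os.path.isfile(candidate) else None
```

Resolving from our own folder first means the GAC copy (with the wrong probing behaviour) never gets a look-in.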

Phew, I said hack :)

### This sounds dangerous

Well, yes and no. Yes, it is a little exotic; but no, in that if you tell the DacFx to load a contributor and this process doesn't work for some reason, the worst thing that will happen is a "Deployment contributor could not be loaded" error - you won't find yourself deploying to a database without the contributor you were expecting.

So no, not really dangerous - just annoying if it doesn't work. I have tests and a release pipeline set up for this (which I will cover in another post) that make it easy for me to check that each update to the DacFx can be taken while this still works. If the SSDT team break this behaviour then I won't ship the update, and anyone using the module can upgrade in their own time.

### How does it work?

You need a dacpac and a publish profile; if you have a dacpac but no publish profile, New-PublishProfile will create a template you can use to get started.
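If you haven't seen one, a publish profile is just an msbuild-format XML file; a minimal one looks roughly like this (the database name and connection string are placeholders - adjust them for your environment):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>MyDatabase</TargetDatabaseName>
    <TargetConnectionString>Data Source=.;Integrated Security=True</TargetConnectionString>
    <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
  </PropertyGroup>
</Project>
```

With a dacpac and a profile in hand, deploying is one call: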

Publish-Database -DacpacPath "path\to\dacpac" -PublishProfilePath "path\to\publish-profile"

or

Publish-Database -DacpacPath "path\to\dacpac" -PublishProfilePath "path\to\publish-profile" -verbose

or

Publish-Database -DacpacPath "path\to\dacpac" -PublishProfilePath "path\to\publish-profile" -verbose -DacFxExtensionsPath "c:\path\one;c:\path-two"

Anyway, enjoy!

## SQLCover v 0.2 - Bug fixes and Azure V12 Support

• Posted on: 5 May 2016
• By: Ed Elliott

I have released a new version of SQLCover, which is a code coverage tool for T-SQL (it lets you identify where you need to focus when writing tests).

This includes a few minor fixes but also support for SQL Azure, so if you run your tests in a v12 database or higher you can now get an idea of code coverage from that.

If you are interested in using this but don't know where to start, there is a powershell script in the download (https://the.agilesql.club/SQLCover/download.php) - you will also want reportgenerator (https://github.com/danielpalme/ReportGenerator/releases/tag/v2.4.5.0).

Once you have downloaded SQLCover, extract the files, right click "SQLCover.dll", go to properties and click "Unblock".

Then in powershell run:

. .\SQLCover.ps1
$result = Get-CoverTSql "path\to\SQLCover.dll" "connection string" "database name" "query"
$outputFolder = "C:\some\path"
mkdir $outputFolder
Export-OpenXml $result "$outputFolder"
Start-ReportGenerator "$outputFolder" "c:\path\to\ReportGenerator.exe"

Change the path to SQLCover, the connection string, database name, query (tSQLt.RunAll), output path and path to report generator (phew), then run it - it should create an "out" directory under the output folder; open index.html and see the awesomeness of the reportgenerator output.

If you want to run mstest tests or nunit or something else entirely, have a look in SQLCover.ps1, which includes some examples at the bottom - Get-CoverExe is probably your friend.

ed

• Posted on: 5 May 2016
• By: Ed Elliott

There seem to be two trains of thought, and I think which one you follow is mainly down to who and where your developers are. The first is that a stored procedure or a function is a great place to put all the business logic that an application needs. The second is that no logic at all should be in code deployed to SQL Server.

You can see these two extremes if you compare what Stack Overflow does (.Net/SQL Server, but highly optimized for performance and deployments) - no stored procedures, T-SQL embedded directly in the code - with the many-thousand-line procedures that we see on a daily basis. This quora answer from Kevin Kline is, I think, extreme but not unexpected:

https://www.quora.com/How-many-lines-have-the-longest-MSSQL-script-you-h...

The largest I ever personally wrote was a 16kb text file, which was many dozens of printed pages, and much personal misery.

Personal misery for anyone tasked with fixing that is exactly right. He goes on to say:

Otoh, I was once at the Microsoft campus in Redmund in a customer lab (what would later become the SQLCAT group) where I was shown a berserk 3rd party report writing tool which would routinely create individual stored procedures over over 30mb in size. The largest I saw at that time was 38mb.

A 38mb stored procedure - that is 39,845,888 characters. Where do you even start with that? Will SSMS even open it?

If we take those as the two extremes - Stack Overflow and 38mb procedures - then somewhere between them sit most of the stored procedures in production today.

When we look at the T-SQL development landscape today we have the ability to write using a development IDE:

• Visual Studio SSDT
• SSMS with SQL Prompt
• DataGrip

We have the ability to unit test code:

• tSQLt
• dbUnit
• ssdt tests

If we use tSQLt then we can mock tables and procedures!

We can measure code coverage (https://the.agilesql.club/blogs/Ed-Elliott/2016-04-08/SQLCover-Code-Cove...).

When we are ready to deploy we are literally spoiled for choice at ways to do deployments:

• Redgate SQL Compare
• SSDT (Yeah!)
• Flyway
• Liquibase
• blah blah blah

So we really do have the ability to write and maintain code in SQL Server, but the question is: should we?

I personally believe there are two factors that we need to understand:

• Maintaining existing applications
• Data locality

In reverse order, there is "data locality": the idea that you should work on your data where it is located. If you want a total of a specific customer's transactions, do you really want to pull every record back to your app and calculate the value there, or do you want to run a query (or set of queries) that does the work for you?

If you have decided that you want to do the calculation where the data is located, do you write in-line (or, bleurgh, generated) code to do it, or do you write a stored procedure? The correct answer is to write a stored procedure.
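To make the data locality point concrete, here is a small Python sketch using an in-memory SQLite database (the table and column names are made up for the example; with SQL Server, the second query is the kind of thing that belongs in a stored procedure):

```python
import sqlite3

# Data locality illustrated with an in-memory SQLite database
# (table and column names are invented for this example).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Transactions (CustomerId INT, Amount REAL)")
conn.executemany("INSERT INTO Transactions VALUES (?, ?)",
                 [(1, 10.0), (1, 2.5), (2, 99.0)])

# Option 1: pull every row back to the application and total it there.
rows = conn.execute(
    "SELECT Amount FROM Transactions WHERE CustomerId = 1").fetchall()
total_in_app = sum(amount for (amount,) in rows)

# Option 2: run the aggregate where the data lives and move one value.
total_in_db = conn.execute(
    "SELECT SUM(Amount) FROM Transactions WHERE CustomerId = 1").fetchone()[0]

print(total_in_app, total_in_db)  # both are 12.5; option 2 transferred one row
```

Both give the same answer; the difference is how much data crosses the wire, which is exactly what hurts once the table has millions of rows.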

### Stackoverflow doesn't use stored procedures so we don't need to

Stack Overflow is a specific use case: they decided to use .Net, so they have a specific set of performance problems to deal with. They deploy (as I understand it) 10 times a day, so if they need to change a query they can do so quickly and easily - how quickly can you modify code and get it to production to fix a problem causing downtime on your mission-critical app written in PowerBuilder 20 years ago? (I jest, but you get the point.)

### Why is writing a stored procedure the correct answer?

When I say stored procedure I mean a stored procedure, function, view, computed column, etc. - something that runs where the data is located.

If we look at the first factor, "maintaining existing applications", we need to understand that there are many reasons why you might not be able to modify the application code yet still need to change the database, and it boils down to the fact that:

• A compiled application does not change
• The volume of data in a database does change

As much as I would like it to be otherwise, sometimes you are dealt a crappy hand with a database you have to support, and if the database is used then, over time, as more data is added, it is unavoidable that the performance behaviour will change.

If the code is available to change without re-compiling the application then you can adjust to support the different data volumes and characteristics.

### So you should put all your business logic in stored procedures?

No. Personally I believe that you should only put in the database the code that must run locally to where your data is. When you do put logic in your stored procedures, it should have unit tests (tSQLt really helps here).

### When else should you have business logic in your stored procedures?

You shouldn't, but as I said, sometimes you are dealt a bad hand and are stuck with business logic in the database (a vendor app, an old unmaintained app, a new maintained-but-badly-architected app) - when this happens, use the tools available to help you maintain it and make changes. Make sure that you:

• Use an IDE and refactor
• Use source control
• Write unit tests (tSQLt)
• Live the dream

### Finally

Business logic is often complex and better modelled in a language that handles that complexity well - T-SQL is about interacting with data: how to store it, how to read it, how to change it in an efficient manner. T-SQL isn't about modelling objects (or functions), it is about data. Applications use data to do things.

Keep your data locality in mind, think about performance but try to keep business logic out of stored procedures. Where you cannot avoid business logic in stored procedures use tools and processes to help.