
ScriptDom parsing and NoViableAltExceptions

  • Posted on: 2 March 2017
  • By: Ed Elliott

If you have ever tried to debug a program that uses the TSql ScriptDom to parse T-SQL, you will know that the process is extremely slow, and this is due to the volume of NoViableAltExceptions (and others) that are thrown and then caught. These are first chance exceptions - they are handled - and they come from the way the ScriptDom interacts with Antlr and the lexer it uses. When you debug a program there are two processes: process one is the debugger, which starts (or attaches to) process two, the debuggee.

The debugger calls a Windows function, WaitForDebugEvent, typically in a "while(true)" loop (everyone should write a Windows debugger at some point in their lives, you learn so much - in fact, put down SSMS and go write your first debugger loop). The debuggee app is then run, and when something interesting happens, like an exception being thrown or a dll being loaded/unloaded, the debuggee is paused (i.e. all threads are stopped), WaitForDebugEvent returns, and the debugger can look at the child process and either do something or call WaitForDebugEvent again. Even if the debugger doesn't care about the exceptions, the debuggee is still paused - when you parse T-SQL under a debugger, even if you tell Visual Studio to ignore the exceptions, the debuggee is still paused for every single one, just so Visual Studio (or your home baked debugger) can decide it wants to ignore that exception, and only then is the debuggee started up again.

What this means for an app that throws lots of first chance exceptions is a constant start, stop, start, stop, which is so painful for performance that it is basically impossible to debug a TSql ScriptDom parse on a large project. I typically debug a project with just one table and one proc and hope it gives me everything I need, or use other tricks like letting the parsing happen without a debugger attached and then attaching a debugger at the right point after the parsing has happened - but then again, I don't have to debug the TSql lexers!

So where is this leading?

I was wondering what effect these first chance exceptions have on parsing T-SQL in normal operation, where we don't have a debugger attached - is there something we can do to speed up the processing?

The first thing I wanted to do was to try to reproduce a NoViableAltException, I kind of thought it would take me a few goes but actually the first statement I wrote caused one:

"select 1;"

This got me curious so I tried just:

"select 1"

Guess what? No NoViableAltException the second time - this didn't look good. Should we remove all the semi-colons from our code? (Spoiler: no!)

Ok, so we have a reproducible query that causes a first chance exception. What if we parse it 1000 times and record the time, then another 1000 times with the semi-colon replaced with a space (so it is the same length)?

Guess what? The processing without the semi-colon took just over half the time of the queries with semi-colons: the average run with a semi-colon took 700ms and the run without took 420ms. Much faster - but who cares about 280 milliseconds? It is less than a second and really won't make much difference to the overall time to publish a dacpac.
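The underlying cost is throwing and catching exceptions on a hot path. As a rough analogue, here is the shape of the experiment in Python - a completely different runtime, so this illustrates the principle rather than ScriptDom's numbers:

```python
import time

def parse_clean(n):
    # Simulates a parse that never raises: a little work per token.
    total = 0
    for i in range(n):
        total += i
    return total

def parse_with_exceptions(n):
    # Simulates a parse that throws and catches a first chance
    # exception on every iteration, as Antlr does with NoViableAltException.
    total = 0
    for i in range(n):
        try:
            raise ValueError(i)
        except ValueError:
            total += i
    return total

n = 200_000
start = time.perf_counter()
parse_clean(n)
clean_time = time.perf_counter() - start

start = time.perf_counter()
parse_with_exceptions(n)
exception_time = time.perf_counter() - start

print(f"clean: {clean_time:.3f}s, with exceptions: {exception_time:.3f}s")
```

The exception-throwing loop is consistently slower, and under an attached debugger the gap widens dramatically, because every throw is a round trip through the debugger.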

I thought I would have one more go at validating a real-life(ish) database, so I grabbed the World Wide Importers database, scripted out the objects and broke them into batches, splitting on GO and either leaving the semi-colons in or removing them all. With semi-colons in, processing took 620ms and there were 2403 first chance exceptions. The second run without semi-colons - which would likely create invalid sql in some cases - took 550ms and there were still 1323 first chance exceptions. I think if we could get rid of all the first chance exceptions the processing would be much faster, but ho hum - to handle the first chance exceptions you just need a fast CPU and not to be a process that is being debugged.
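For reference, the batch splitting I used can be sketched like this - a simplified version of what client tools like sqlcmd do (real batch separators can also carry a repeat count, e.g. "GO 5"):

```python
import re

def split_batches(script: str) -> list[str]:
    # GO is a batch separator recognised by client tools, not by T-SQL
    # itself; it must appear on its own line, optionally with a count.
    batches = re.split(r"(?im)^\s*GO\s*(?:\d+\s*)?$", script)
    return [b.strip() for b in batches if b.strip()]

script = """
create table dbo.t1 (id int);
GO
create procedure dbo.p1 as select 1;
GO
"""
print(split_batches(script))
# two batches: the create table and the create procedure
```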

SSDT Deploy / Publish Performance

  • Posted on: 1 March 2017
  • By: Ed Elliott

Publishing dacpacs is a little bit of a pain when you have multiple databases; it can easily take minutes to hours to deploy changes depending on how many databases there are and the size of those databases. I wanted to understand more about the publish process and what we can do to speed it up as much as possible, so I did some digging, and this is a randomish post about how it all works and what we can do to make it faster.

Process Overview

Roughly speaking the process for deploying dacpacs is:

  1. Open dacpac (zip file)
  2. Open database
  3. Compare dacpac to database
  4. Come up with a list of changes
  5. Generate the T-SQL for the changes
  6. Run the deploy script

1. Open the dacpac (zip file)

The dacpac contains the xml representation of the model of the database plus the original source code. To be able to deploy changes we need the original source code, because the model contains the definitions of the objects - such as which columns a table has and which parameters a stored procedure has - but not the body of the stored procedure, including its comments.

The dacpac is a zip file, which means that to read from it the deployment must open a file handle to the dacpac and decompress anything it wants to read (or compress anything it wants to write). The larger the dacpac, the longer it will take to decompress and to read from disk, so here are our first two important things to note - the deploy will need:

  • CPU to decompress the contents of the dacpac
  • Fast disk as it has to read from disk
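You can poke at this structure with any zip library. A small sketch in Python, building a toy dacpac-shaped zip first since we don't have a real one to hand (the model.xml content is a made-up placeholder; only the entry name matches a real dacpac):

```python
import io
import zipfile

# Build a toy zip in memory with entries shaped like a dacpac's;
# the xml bodies are placeholders, not a real DacFx model.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("model.xml", "<DataSchemaModel><!-- object definitions --></DataSchemaModel>")
    z.writestr("Origin.xml", "<DacOrigin />")

# Reading the model back means decompressing it - this is the CPU and
# disk cost described above, and it grows with the size of the model.
with zipfile.ZipFile(buf) as z:
    names = z.namelist()
    model = z.read("model.xml").decode()

print(names)
print(model[:30])
```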

When you run the publish, by default the model is read from disk into memory; however, for very large dacpacs you might find that you don't have enough memory for it. A Windows system with low memory is bad - two things happen:

  • The paging file is used
  • Applications are told to free memory

This is painful for an application and when you get paging, especially on non-ssd drives, you are going to find everything about the whole system slow.

If you have a large model in your dacpac and you don't have much memory, then you will probably need to stream the model from disk rather than load it into memory - but you had best make sure you have an ssd to run it from!

For more details on how to do that, see Richie Lee's write-up (Richie Lee FTW!)

2. Open database

The database contains all sorts of goodies, like the version that will be deployed to, the database settings, and the definitions that will be compared to the dacpac to see what changes need to be made. If your SQL Server responds slowly then the publish will be affected, so make sure your SQL Server box is up to scratch, and expect slower deploy times on resource-limited instances.

3. Compare dacpac to database

I had always wondered how they compared the two different objects, so I bust out reflector and used it to have a poke about. What happens is that a model is built of the source and the target - a model is a representation of each object: it has its name, its properties and its relationships to other objects. For example, a table has a relationship to its columns, and each column has a relationship to its type (a relationship, not a property).

If you think about this it means that for each deploy we actually need to store the model of both the database and the dacpac, doubling the size requirements. This isn't an exact double of the size of the source code, because what it compares is things like the object properties and relationships but also the tokens that make up the body of the object, so a lot more objects are created around the code. In short, if your source code is 100 mb you will need some multiple of 2 * 100mb to upgrade an existing database - I am not sure if there are any figures to show what that multiple is, but if you do a publish and run low on memory, or you get lots of garbage collection in your deploy, then consider either storing the model on disk or throwing in some more memory.

There are a couple of interesting extra things that happen, in particular to do with the size of the code in objects and whether to store it in memory, store it in memory compressed, or write it to disk. If, for example, you had a stored proc that was less than 500 characters long, the publish will store it in memory.

If the string representation of a procedure is over 500 characters but less than 16 thousand characters, then it will be stored in memory but compressed first.

Finally, if the stored proc (I am using procs as an example, but I think it is any code unit minus the create + name, so everything after "as") is over 16 thousand characters, then a temp file is generated in your user temp folder and the contents are written to that.

So three different behaviours depending on the size of your objects:

  • Under 500 chars - you need memory
  • Over 500 chars, under 16,000 you need memory and CPU
  • Over 16,000 chars you need memory and fast disk for your temp folder
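The three behaviours can be sketched as a simple size-based decision. The helper below is illustrative, not DacFx's actual code - only the 500 and 16,000 character thresholds come from the observations above:

```python
import os
import tempfile
import zlib

SMALL = 500       # under this: keep the raw string in memory
LARGE = 16_000    # over this: spill to a temp file on disk

def store_body(body: str):
    """Return a (kind, payload) pair mimicking the observed behaviour."""
    if len(body) < SMALL:
        return ("memory", body)
    if len(body) < LARGE:
        # CPU cost: compress mid-sized bodies before keeping them in memory.
        return ("compressed", zlib.compress(body.encode()))
    # Disk cost: large bodies go to a temp file in the user temp folder.
    fd, path = tempfile.mkstemp(suffix=".sql")
    with os.fdopen(fd, "w") as f:
        f.write(body)
    return ("tempfile", path)

print(store_body("select 1")[0])   # memory
print(store_body("x" * 1_000)[0])  # compressed
kind, path = store_body("x" * 20_000)
print(kind)                        # tempfile
os.remove(path)  # the caller is responsible for cleaning up the spill file
```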

I created three demo databases and deployed them over and over, the first contains a couple of hundred procedures with slightly under 500 characters, the second with slightly over 500 characters and the last with slightly over 16,000 characters.

When I ran the publish this is what they looked like:

Under 500 char procs:

(chart: under 500 chars procs, fastest to deploy)

Slightly over 500 char procs:

(chart: just over 500 chars procs, similar to under 500 chars but includes some time to compress some strings)

What we can see here is that the under 500 char procs take the least amount of time, but the interesting thing is that the database with slightly over 500 char procs needs a little bit of extra umpf from the CPU - I have shown this with the pink circle. Both deploys follow the same pattern for sqlpackage.exe CPU usage: it goes from around 90% up to 100%, drops sharply to around 80% and then climbs back to 90%, but the second project, with the slightly larger procs, also has an extra splurge of CPU which I am putting down to the extra compression those strings are getting - it certainly tallies up, even if it is not scientific :)

Aside from the extra CPU usage they are pretty similar. The black highlighted line is garbage collection, and the way this counter works is that every time it changes, some garbage collection is happening - the first version does, I think, 3 collections and the second does 4, which sort of makes sense, as once the strings have been compressed the original strings can be thrown away.

It is when we look at the database with lots of objects over 16,000 characters long that we see some interesting things, and some other forces come into play:

(chart: over 16,000 chars procs, lots of disk activity)

The first thing to note is the garbage collection: there are about 8 changes to the line, so 8 sets of garbage collection, which is expensive for a .net app. We get much more processor usage for a more prolonged time, and we really start to see some disk activity - write activity has just one peak, while the files are written, but read activity stays quite high throughout the whole process. It could be sql reading from disk (my demo was from a single ssd with a local sql instance), but we can see that after the sql cpu settles down the green disk read line stays high, so it is unlikely to be pure sqlserver.exe.

What does this tell us?

Well, if you have large objects - multi-thousand line stored procs - then you had better get yourself some serious SSDs to deploy from. If you have smaller objects then you'll need CPU and memory - don't scrimp here!

4. Come up with a list of changes

When the properties and tokens in the source / destination objects have been compared, the differences are added to a list of modifications. This is fairly straightforward, but it is not the final list of modifications, because there are lots of options to customize the deployment, such as ignoring certain things - what happens next is some post-processing on the list to remove anything that the options say should be removed.

We then get the opportunity to interrupt the deploy process with a deployment contributor and remove steps as we want - if we do that, anything we do is overhead, so be warned: be fast in what you do or you will slow down the whole process!

There are some things, like default constraints, that are stored differently in SQL Server than in our project - so when a difference is found, as part of the post-processing each constraint is "normalized" using the ScriptDom and a visitor, to find out whether the changes are actually different or just appear different. This means that we don't keep getting constraints deployed over and over (a relatively new feature, as this was something that used to happen a lot - a great addition by the SSDT team BTW!), but it does mean that for every default constraint we need to go through this normalization using the ScriptDom, which isn't exactly lightning fast.
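The idea behind the normalization is easy to illustrate: SQL Server stores a default like 0 as ((0)), so a naive string comparison sees a difference where there is none. Here is a toy version in Python - the real implementation walks the ScriptDom parse tree with a visitor rather than trimming strings:

```python
def normalize_default(expr: str) -> str:
    # Naive sketch: strip whitespace and redundant outer parentheses,
    # which is how SQL Server wraps stored default constraint expressions.
    # Assumes the whole expression is wrapped, e.g. ((0)) or (getdate()).
    expr = expr.strip()
    while expr.startswith("(") and expr.endswith(")"):
        expr = expr[1:-1].strip()
    return expr

# In the project the default is written as 0; SQL Server stores ((0)) -
# without normalization it would look like a change on every deploy.
print(normalize_default("((0))") == normalize_default("0"))  # True
```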

Takeaways from this are that the deploy does a lot of things to help us - if it was a straight string comparison then it would be faster but the deploy options would not be as configurable. The more objects you have, the longer the deploy will take!

5. Generate the T-SQL for the changes

6. Run the deploy script

I'm going to stop this post here - I think these last two are a bit more straightforward: the more things you have to change, the more time it will take :)


If the deploy is slow, use perfmon to measure the CPU, disk and garbage collection of sqlpackage.exe (or whatever is doing the deploy). Have a think about whether streaming the model from disk would be better; if you are CPU bound, add a more powerful CPU; if you are low on memory, add more memory; and if you are on an old slow disk, put it on an ssd.

If you are low on CPU you might be tempted to add more CPUs - understand that a single deploy happens in serial, so adding more CPUs won't make a deploy quicker if the deploy is the only thing running on the box; if it is fighting for resources, then more CPUs may help.

Why is SSDT so slow?

  • Posted on: 2 June 2015
  • By: Ed Elliott

I see the same question again and again "I am using SSDT but it takes a long time to publish changes, how do I make it faster?" or the ever useful "SSDT is too slow".

SSDT is slower than the alternatives at publishing changes because it does so many cool things

Anyone who has used SSDT knows that doing a publish can take time, how much time typically varies depending on a number of factors:

  • Is the database local or on the other side of the earth (high latency / possibly low bandwidth)?
  • How many objects are in your database?
  • How many objects have changed in your database?

These seem obvious: the slower the link, the longer it will take to get the data SSDT needs to generate the list of changes that are required. If you have more objects, or lots of work (changes) to make, then it will take longer.

What is not so obvious is what SSDT is doing and whether the time it takes to publish is worth it. If you can script out a procedure in SSMS, change a part of it and deploy it with F5 in a matter of seconds, why wait longer in SSDT? If the publish time is a few seconds then fine, but what if it is 30 seconds, 1 minute, 5 minutes, 30 minutes? What is too long before you stop using SSDT and go back to SSMS?

Personally I think it is worth the time it takes to publish using SSDT because of the benefits you get, but I do use shortcuts to speed up my development, as even waiting 30 seconds every time you want to test your code is very distracting and really slows down development. That being said, there are reports in the SSDT forum of publishes taking 30 minutes - 30 minutes is definitely too long; if you are getting that, then I would really find out what is wrong and fix it, as that is unacceptable.

Why is it worth it?

When you use SSDT you get things like reference checking and the ability to run validation rules like the T-SQL code smells, which means that every time you build you are validating the code. When you modify the T-SQL and deploy it directly to SQL Server you are not getting that compile-time checking - that is bad. Consider the case where you drop a column that stored procedures rely on: you can easily do it in SSMS, but SSDT will not deploy if you do that.

When you publish your database using SSDT you are testing SSDT and making sure that the upgrade scripts always work. When you do this you gain confidence that the tooling works correctly, so you can do "the right thing" (TM - me 2015) and use continuous integration and move towards continuous deployment with confidence.

Remember, I am only talking about publishing changes - there are lots of other benefits of using SSDT aside from these, but these are why I continue to use it even though it isn't as fast as using SSMS.

What can I do if it is too slow?

If it is way too slow, like 30 minutes to publish, then raise an issue on Connect; I would also raise it in the SSDT forum and ask for help (linking to the Connect item) - the SSDT team are active in the forum and really good at helping people.

If it isn't way too slow, just too slow (slow, too slow, fast and too fast are entirely dependent on you), then generally you can:

  • Look at composite projects
  • Investigate and fix warnings
  • Limit changes between publishes

Look at composite projects

Investigate and fix warnings

Each warning takes time to find and produce; if you have one it won't be noticed, but if you have hundreds it will. Fix them and be happy that you are a) speeding up the build time and b) fixing the warnings in your project - go on dude, you da man!

Special note on performance for tools writers: if you are testing an app you are writing in Visual Studio that uses DacFx to parse a project or dacpac, be aware that the ScriptDom uses Antlr to parse the T-SQL and it throws a lot (and I mean a lot) of first chance exceptions that are caught. Running the ScriptDom under a debugger causes lots of backwards and forwards between the debugger and the app, which makes it very slow - either detach or use a small dacpac, otherwise it is painful. Note, this paragraph is just for tools writers; a normal publish, although launched from Visual Studio, is not run as a debuggee, so you don't get this issue.

Limit changes between publishes

If you have a thousand tables and procedures that have changed, it will take time - that should be fairly obvious. If you do have lots of changes, then why did you leave it so long? Come on: release early, release often.

Bypass publish

What is this madness you speak of? Bypassing the publish process is a great way to get SSDT as fast as using SSMS. There are two general routes you can take; the first is not recommended by me, the second is. The first way is to do your development in SSMS and, when you are finished, do a schema compare and deploy back to your project. I don't like it, as I think it is unnecessary, and you won't improve at SSDT if you don't use it and learn how to use it - SSDT is a major change in the way SQL Server developers must approach their development, but it leads to greatness (ok, better software quality in the end, but greatness sounds better). The second way is one of the Visual Studio addins that let you right click a procedure in solution explorer and deploy it directly to your dev database without going through the publish - I have one and Dave Ballantyne has one; I would use Dave's, as I have seen the code in mine, ha ha. If you go down this route, you should include a full publish and test run in your workflow before checking in.

Final note

Think about why you are using SSDT and find ways to make it fast enough that you are happy with it. SSDT really is a great tool, and the more people who use it and benefit from it, the better it will get!
