Ed Elliott's blog

Learn how to unit test SQL Server T-SQL code

  • Posted on: 5 September 2017
  • By: Ed Elliott

A free email course on how to use tSQLt including the technical aspects of writing unit tests AND the art of writing repeatable, useful unit tests for even the most complicated T-SQL code

UPDATE: I thought that if I got a certain number of sign-ups by October 1st I would run the course, but within two days three times my initial target had subscribed, so I have closed the first course. Sign up if you want to join the wait list or the next course!

Unit testing helps us write better code and make rapid changes to it, and has generally been seen as a good idea for about 10 years. Writing tests for T-SQL code is made much easier by tSQLt, but there is quite a high barrier to entry, both in the technical skills needed to get tSQLt running and in how to approach large code bases of, sometimes, unfriendly T-SQL code and tame that code with unit tests.

I have successfully unit tested T-SQL code in a number of different environments, from clean greenfield projects to legacy ones. I have written this course to help people get started with unit testing, and also to help them turn unit testing into a part of their development process that they can use every day to improve the quality of their work and the speed at which deployments can be made.

Are you any of these people?

  • An application developer experienced with other testing frameworks for testing application code?
  • A T-SQL developer with no testing experience?
  • A T-SQL developer with testing experience in other languages or frameworks?

If you are then you should sign up (https://www.getdrip.com/forms/317903840/submissions/new) and let me help you learn tSQLt unit testing for SQL Server.

Why an email course?

I thought it would be an interesting way to provide actionable information regularly and to allow a level of assistance and feedback that I don't think is possible with blogging or writing articles.

How do I sign up?

Run over to: https://www.getdrip.com/forms/317903840/submissions/new and pop in your details.

The course is going to start on the 1st of October and as it is the first one I am limiting the amount of people who can start it. If the first one is a success then I will run it again but it won't be until at least 2018.

What will be the format?

The course will be one email a week, which will include an overview of the week's topic, some detail on the parts that need it, and an exercise for the week which can be done on a demo database or any SQL Server database code you have.
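To give a flavour of the sort of thing the exercises cover, here is a minimal tSQLt test. This is a sketch, not course material: the procedure dbo.GetCustomerName and the dbo.Customers table are hypothetical names I have made up for illustration; the tSQLt calls (NewTestClass, FakeTable, AssertEquals, Run) are the real framework procedures.

```sql
-- A minimal tSQLt test (dbo.GetCustomerName / dbo.Customers are hypothetical)
EXEC tSQLt.NewTestClass 'CustomerTests';
GO
CREATE PROCEDURE CustomerTests.[test GetCustomerName returns the name for an id]
AS
BEGIN
    -- Arrange: replace the real table with an empty fake and add just the rows we need
    EXEC tSQLt.FakeTable 'dbo.Customers';
    INSERT INTO dbo.Customers (CustomerId, Name) VALUES (1, 'Ed');

    -- Act
    DECLARE @name NVARCHAR(100);
    EXEC dbo.GetCustomerName @CustomerId = 1, @Name = @name OUTPUT;

    -- Assert
    EXEC tSQLt.AssertEquals 'Ed', @name;
END;
GO
EXEC tSQLt.Run 'CustomerTests';
```

FakeTable is the key trick: the test controls exactly what data exists, so it is repeatable on any copy of the database.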

And it is free?

Yep, gratis. I am not open sourcing it yet - maybe in the future - but the course itself is free, aka "no service charge".

SSIS ForEach Enumerator File Order

  • Posted on: 4 September 2017
  • By: Ed Elliott

I saw on Slack recently a question about how the SSIS file enumerator orders (or more specifically doesn't order) files. I have been thinking about SSIS quite a lot lately and, while I am in no hurry to start using it day to day, it is quite an interesting tool.

So anyway, I saw this question that went like:

"does anyone know in what order files are processed in 'Foreach File Enumerator'?
I used to think it was alphabetically but after some testing this is not always the case?
Second part is there anyway to specify the order by say size or date?"

So how does SSIS order (or not order) files?

The answer to this is pretty simple and I thought I knew it, but I wanted to confirm. In my mind I thought, "how do they get a directory listing?", and my mind responded "probably using the win32 find file API's"; my mind then wandered somewhere else before writing a quick package that:

  • Has a ForEach loop with a breakpoint set at pre-execute
  • Has a single task in the ForEach loop with a breakpoint set at pre-execute
  • Has a variable to hold the file name

Pretty simple, the ssis package looked like:

Pretty simple hey :)

I set the file path for the enumerator to c:\ssisSearch and put a load of files and directories in (because the win32 find functions have a buffer and you need to call them multiple times, I wanted to make sure we covered cases where there were multiple find calls). Then I reached for my favourite tool of all, procmon.exe (I say favourite; it used to be, then I had a job where I used it literally every single day for hours and hated it, so I stopped using it, but now I'm back with it!). In procmon I set a filter on the c:\ssisSearch folder and also on DtsDebugHost.exe and ran my package - the files were returned in alphabetical order.

I then went into procmon and to the properties of the "QueryDirectory" operation on that folder and (when the symbols had loaded) I could see that the call SSIS was making was from ForEachFileEnumerator.dll (native, not .net, so we can't grab Reflector) and that it calls "FindFirstFileW".

A quick hop, skip and jump to MSDN, and FindFirstFile discusses the ordering of files here:

The FindFirstFile function opens a search handle and returns information about the first file that the file system finds with a name that matches the specified pattern. This may or may not be the first file or directory that appears in a directory-listing application (such as the dir command) when given the same file name string pattern. This is because FindFirstFile does no sorting of the search results. For additional information, see FindNextFile.

FindNextFile has this:

The order in which the search returns the files, such as alphabetical order, is not guaranteed, and is dependent on the file system. If the data must be sorted, the application must do the ordering after obtaining all the results.

So basically NTFS tends to come back alphabetical and FAT roughly in date (creation) order, but don't rely on either.
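As for the second part of the question (ordering by name, date or size): don't rely on the enumerator at all - get the file list yourself and sort it explicitly before processing (in SSIS that would typically mean a script task filling an object variable for a "Foreach From Variable" enumerator). A quick sketch of the idea, in plain Python rather than SSIS, with made-up file names:

```python
import os
import tempfile

# Directory listings come back in whatever order the file system gives them,
# so sort explicitly when order matters.
def files_in_order(folder, key="name"):
    paths = [os.path.join(folder, f) for f in os.listdir(folder)
             if os.path.isfile(os.path.join(folder, f))]
    if key == "name":
        return sorted(paths, key=lambda p: os.path.basename(p).lower())
    if key == "date":
        return sorted(paths, key=os.path.getmtime)
    if key == "size":
        return sorted(paths, key=os.path.getsize)
    raise ValueError("unknown sort key: " + key)

# build a scratch folder with a few files of different sizes
folder = tempfile.mkdtemp()
for name, content in [("b.txt", "12345"), ("a.txt", "1"), ("C.txt", "123")]:
    with open(os.path.join(folder, name), "w") as f:
        f.write(content)

print([os.path.basename(p) for p in files_in_order(folder, "name")])
print([os.path.basename(p) for p in files_in_order(folder, "size")])
```

The same shape works in a C# script task: enumerate with Directory.GetFiles and then OrderBy whatever key you need.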

Just a final thought: SSIS runs on Linux now, so I have no idea what the order is there :)

TSQL Tuesday - Databases and DevOps

  • Posted on: 13 June 2017
  • By: Ed Elliott

DevOps isn't running SQL Server in a container and pushing code to it from Jenkins

When we talk about DevOps we envision that we have the ability to check-in code, spin up a new environment, deploy, test and push that code to production and be in the pub at 4.

We know that by having the right tooling in place we can make releases more reliable and more frequent, enabling us to deploy the changes the business wants when they want them, rather than every x days/weeks/months/years/decades. This outcome is best for everyone; no one loses, and the path to fun and profit is just that, fun and profitable.

So what do we need to do, run SQL Server in containers and write and deploy our code using SSDT? Yes, do it, but you don't need to: you can do DevOps and work on frequent releases with a standard SQL Server instance and manually written deploy scripts that are emailed around.

So what is DevOps if you can do it without source control?

DevOps is about enabling frequent releases - that is the core element of it and to enable frequent releases we need:

  • A way to deploy code (a DBA wearing out the F5 key in SSMS is a way to deploy code)
  • A way to be confident about the changes we are about to make (hint tests, lots of them)
  • A way to know when there is a problem with production (monitoring and alerting)
  • The ability to identify bottlenecks, work together and make improvements to the process

The last point is most important, for me it stems from kanban and the kaizen approach of identifying bottlenecks and working together to remove the bottlenecks.

If you look at your existing approach to making changes, what are your bottlenecks? How can these be improved? When you deploy changes and they go wrong, what stopped you finding out about those problems earlier? When you look at the different stages of a change, from business analysis to troubleshooting issues reported by customers, how much time and money could have been saved by not having an issue at all, or by identifying it in developer tests or at roll-out rather than when the user complained about it?

If you truly start looking at bottlenecks in your entire release process it will more than likely lead you to a DevOps culture and practices, including the tools required to do it. But without the underlying kaizen approach of continually removing bottlenecks in your processes, you will simply pay for tooling you don't need and cover your laptop with stickers, and not deliver the value that the business needs.

Which one of these are you?


SSDT: Unable to connect to master or target server.

  • Posted on: 12 June 2017
  • By: Ed Elliott

Error in SSDT: "Unable to connect to master or target server." - the server displayed isn't the server but the database

Every now and then I come across this error in SSDT, normally when trying to publish, and the odd thing is that the description never matches the actual cause (if you are desperate for the cause: it is because you can't connect). The thing I like about the description is the way it tries to tell you what is wrong and what server you are connecting to, but it fails at both, instead telling you about an unrelated error and a database name rather than a server name.

What we have is:

"Unable to connect to master or target server '{0}'. You must have a user with the same password in master or target server '{0}'.\"

Twice it tries to tell you the server name but both times it actually tells you the database name. I thought I would dig into it a little with Reflector to see where the error comes from and whether or not it would ever show the server name. In Reflector I found the error and where it was used. If we look in what is surely everyone's favourite SSDT dll, Microsoft.Data.Tools.Schema.Sql.dll, we can see that the error message is used in SqlDeploymentEndpointServer.OnInit and we have something like:


catch (ExtensibilityException exception)
{
    Tracer.TraceException(TraceEventType.Verbose, TraceId.CoreServices, exception, "Error loading DSP families");
    throw new DeploymentFailedException(string.Format(CultureInfo.CurrentCulture, DeploymentResources.InvalidServerEndpoint, new object[] { targetDBName }), exception);
}

They are indeed passing targetDBName into the InvalidServerEndpoint error message, so yes, the error message will only ever show the database name.

I had a quick look at what can cause this, and it is something to do with opening the SqlConnection, which is wrapped in lots of retry logic that is different for Azure compared to other types of SQL Server - lots of interesting stuff, maybe for another post, but basically SSDT wasn't able to open a connection. Check the server name, ports, database etc. (i.e. maybe your default database is not available to that user), connect via SSMS first, and when that works come back to SSDT.

Footnote: I did think about raising a Connect item but couldn't be bothered; if anyone does, I'll be happy to vote for it!

SSDT DevPack + Visual Studio 2017

  • Posted on: 8 May 2017
  • By: Ed Elliott

I have upgraded the SSDT Dev Pack to support Visual Studio 2017, fixed a couple of minor annoyances, and started to think about how to improve it going forward.

The first feature of sorts is the ability to clear the connection used by the quick deployer without having to restart visual studio.

Secondly, I am pretty convinced that the main things people use it for are the quick deploy and the tSQLt test builder (I also use the keyword formatters), so I have moved everything else underneath a menu item called "deprecated" - if anyone really wants one of those features then let me know and I will keep it, but I will likely remove them at some point.

I am planning on bringing SQLCover into it at some point. I haven't really been using the MergeUi part; I think the better approach is to use sp_generate_merge to generate a merge statement, which is much more reliable. If you have a simple table then MergeUi might still be useful to you.

I will publish it to the marketplace so updates happen automatically; if you want a copy of it go to:

https://visualstudiogallery.msdn.microsoft.com/435e7238-0e64-4667-8980-5...

Before I published it I realised that when I wrote the code (a couple of years ago!) I had taken a couple of shortcuts; one was to always use the 120 version of the parsers instead of whatever the project was set to, so I decided to fix that first and then publish - looking over old code is never good, ha ha.

It is now published, so you should be able to install it for VS 2017, and existing installs on 2015 will be upgraded (I hope it doesn't break for anyone; there are lots more users than when I last published it!).

SSDT Dev in Visual Studio Code

  • Posted on: 27 April 2017
  • By: Ed Elliott

I have been quite interested by VS Code and have been using it more and more recently. I use it for all my Go (#golang FTW) work and also PowerShell, and I have been toying with the SQL tools team's SQL extension, which is great. For a long time I have thought about bringing the SSDT experience to other IDEs like JetBrains IntelliJ, but because I have been using VS Code quite a lot recently, and separately have been doing more and more JavaScript and TypeScript, I thought it would be interesting to see how hard it would be to write a VS Code extension that lets me build dacpacs.

The general goal is not to re-create the SSDT experience in Visual Studio but to provide a lighter, faster way of developing SQL code - an extension that:

  • is fast
  • is lightweight - memory is important for dev machines and 2 GB for a large db project is limiting
  • gives us the important things like pre/post-deploy scripts, refactoring and the ability to generate a dacpac

I am not really interested in providing UIs like the schema compare - for that use SSDT or spend some money on the Redgate tools.

I am also not interested in replacing what the SQL tools team are doing; I am happy to leave them the harder, important, but (to me) less interesting things like T-SQL formatting. With that in mind I have started a new project. It is very hacky at the moment, more an experiment to see if it will work, but it is a VS Code extension that builds dacpacs:

https://github.com/GoEddie/vscode-ssdt/

This is basically just a wrapper around the DacFx, so there shouldn't be anything too hard in it. It is also windows only for now (until the DacFx is cross platform it will only ever be windows, but I hold out hope for a cross platform DacFx one day!).

This works similarly to the SQL tools team extension in that there is a .net app that is called by the VS Code extension (TypeScript running on node.js). If you want to try it: download the repo, run the SSDTWrap exe (not under a debugger or you will face T-SQL parsing first chance exception performance hell), then in VS Code open the folder "src\s2" and the "extension.ts" file and press F5 - this will open a new VS Code window. Open a folder with your .sql files and it will create a T-SQL model and report any errors.

If you do ctrl+shift+p to open the command palette and then run "build dacpac", it will generate a dacpac for you from the sql files. You will need to put this ssdt.json file in the root of the directory you open in vscode:


{
    "outputFile": "c:\\dev\\abc.dacpac",
    "SqlServerVersion": "Sql130",
    "references": [
        {
            "type": "same",
            "path": "C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\Common7\\IDE\\Extensions\\Microsoft\\SQLDB\\Extensions\\SqlServer\\140\\SQLSchemas\\master.dacpac"
        }
    ],
    "PreScript": "",
    "PostScript": "",
    "RefactorLog": "",
    "ignoreFilter": "*script*",
    "PublishProfilePath": "C:\\dev\\Nested2\\Nested2.publish.xml"
}

It doesn't support a lot of things at the moment, but I will add the support needed to build a dacpac including the refactorlog.xml, pre/post-deploy scripts and the references that we all know and love.

I tested with a folder of 2,000 procedures. I tried testing 10,000 but I couldn't get SSDT to load them into a project without crashing (on a top-of-the-range i7 laptop with 32 GB RAM and an SSD), so in the end I settled for 2,000 procs. The times to build the dacpac were:

App                    Time (milliseconds)
Visual Studio / SSDT   5630
VS Code                2051

So as well as handling larger projects, it is faster too; for comparison, a small project (one proc/table) was about 17 seconds to build the dacpac.

Anyway, it is all a bit of fun and pretty hacky at the moment, but I like using VS Code and am finding it much more lightweight than Visual Studio, so I will likely invest some more time in it.

If you feel like trying it out, good luck :)

Updating TSqlModels (DacFx)

  • Posted on: 26 April 2017
  • By: Ed Elliott

This one is for the DacFx nuts out there; it can't be a very big club, but judging from the occasional emails I get about it, the quality is very high, ha ha.

If you have a TSqlModel and you want to make a change to it, you have a couple of choices:

- Create a new model and copy over everything
- Use AddOrUpdateScript

AddOrUpdateScript can only update scripts that you add (i.e. know the name of), so if you get the model from a dacpac you are out of luck.

I recently wanted to remove a file from a model I had created, so what I ended up doing was to re-add a script with these contents:


--

I didn't try an empty file but this let me remove objects from the model quite nicely.
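In code, the trick looks roughly like this. A sketch only - it needs the DacFx assemblies (Microsoft.SqlServer.Dac.Model) referenced, the object names are made up, and the key detail is that the source name in the second AddOrUpdateObjects call must match the one the script was originally added under:

```csharp
using System;
using Microsoft.SqlServer.Dac.Model;

class RemoveByComment
{
    static void Main()
    {
        var model = new TSqlModel(SqlServerVersion.Sql130, new TSqlModelOptions());

        // Add a table via a named script (t1.sql is a made-up source name)
        model.AddOrUpdateObjects("CREATE TABLE dbo.t1 (id INT)", "t1.sql", new TSqlObjectOptions());

        // "Remove" it by re-adding the same source name with just a comment
        model.AddOrUpdateObjects("--", "t1.sql", new TSqlObjectOptions());

        // The table should no longer be in the model
        foreach (var table in model.GetObjects(DacQueryScopes.UserDefined, Table.TypeClass))
            Console.WriteLine(table.Name);
    }
}
```
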

On another note I found something quite interesting that I missed when building a model.

When you parse a script you get back a list of parse errors and the script object. You can add the script object to the model and call validate, and even though you got parse errors the model validate might still pass (or parse, waa haa haa haaaa).

The total number of errors is the script parse errors plus the model validation errors, which seems obvious now, but I missed it.

Hope it helps someone.

Happy DacFx'ing everyone!


My SQL Server Development Team Maturity Levels

  • Posted on: 24 April 2017
  • By: Ed Elliott

A team's maturity shows in its choice of tools.

I have seen quite a few different development teams in wildly different environments, and the single fact that really stands out is that you can tell how good a team is by the tools they use. It isn't always the specific choice of tools, although that can be important; it is the fact that they evaluate new tools and choose to either use or ignore them.

This is basically my personal maturity model for SQL Server developers. I think it is quite important because it is a measure of the effectiveness of a team. It should be pointed out that some teams have no need to be effective, whereas other teams are vital to how an organisation runs.

If we take a few examples, the first shows where a team has no need to be mature at all:

Team one, no one cares

Team one supports a vendor supplied application and the database is SQL Server, the vendor supplies an upgrade script to run every 3 months and the team is not allowed to make any changes to the application. In this scenario, there isn't really any benefit in the customer having the database in source control - any problems are dealt with by the vendor and any scripts can equally be run in SSMS or sqlcmd.exe or a custom application. Even though the application is critical, the vendor supplies all support for the application and the team just need to keep it updated.

The second one, is a team that is important.

Team two, everyone cares

Team two develop and support an in-house electronic medical record application. It was first written in the late 90's and has evolved to include a number of SQL Server databases and .net services. All the code, from the .net services to the databases and the tooling the developers use, is critical; hacking together a release is not going to wash when a nurse might not be sure whether something needs to be administered to a patient or not.

Team three, not critical but the team care

Team three develop a website that generates revenue for the company. Revenue is the most important factor but the team has good people in it who care and want to do the right thing for the company, not at the loss of revenue but with the idea to increase revenue and deployments.

Maturity Levels

OK so this is pretty simple, we have these levels:

  • Low
  • Medium
  • High

Wow. Just WOW

That is an amazing list, how did you come up with it? Did it come from some PhD study on the effectiveness of lists in the internet age? No.

So a little more detail...

Low

The low maturity team is one that uses basic tools or no tools at all. They probably do all their development in SSMS without any of the refactoring or intellisense helpers that are available. If they do use source control then their database is likely not deployable in any form purely from source control.

This could well be perfect for team one, there is literally no need for them to spend any time on anything that other teams might require as an absolute basic. There is nothing wrong with this if the development and deployment is genuinely not important.

If you have a database and you don't care about it then the best maturity level for you is low. Don't feel bad about it, it is awesome in its own way and we love awesome.

Medium

This is where it starts to get a little bit exciting. The team probably has some tooling - they might have SQL Prompt for SSMS or use another IDE like JetBrains DataGrip. For deployments they could use a tool such as Redgate ReadyRoll, or perhaps they have written their own deployment tool.

Why is ReadyRoll in the Medium maturity level?

The main reason is that in a medium maturity team, creating and managing deployments is central to how the team develops and deploys changes. ReadyRoll helps teams generate deployments; it is a tool to manage changes and deployments.

Thinking about deployments and having a way of deploying changes is not necessarily a bad thing - if you are not team one then you should absolutely be thinking about deployments, and if you were at low maturity it is a great step towards getting more mature in your SQL Server development.

Why is DataGrip in the Medium maturity level?

Ok, since you ask - if it were for another database then I would put it in the High section, but the tooling for SQL Server is so great that unfortunately it goes into Medium. If there was a genuine reason why a team used it I would be tempted to throw them a High maturity level bone, but no guarantees.

High

For a high maturity team I want to see good IDEs being used - SSDT or DataGrip or something similar. It doesn't have to be SSDT, but if not then there needs to be a valid reason why. I also want to see code checked in (not just changes checked in) and then the code being built, tested and deployed, either straight to production or prepared to be deployed later. If it is to be deployed later I want to see a plan in place that will get them to continuous deployment.

Where do teams two and three fit?

I would hope between Medium and High - if not, you have to ask questions as to why not. It is 2017 and there are plenty of resources available out there.

Show me a chart

This is really begging for a cool matrix showing what all the criteria are (there must be more, of course) and where teams fit, but hey, this is a low-medium maturity blog so maybe one day in the future.

SQL Server Continuous Deployment "In a Box"

  • Posted on: 7 March 2017
  • By: Ed Elliott

What is this?

Well, if you read the name aloud - "SQL Server Continuous Deployment in a box" - then, if I have done my job correctly in choosing the title for the blog, it should give you a hint :)

What is the big idea?

There is really some great tooling for SQL Server - second to none when it comes to RDBMSs - and setting up Continuous Deployment pipelines is actually pretty simple once you know which parts to plug together. I even did it in 55 minutes once (https://www.youtube.com/watch?v=9YJQTx3bPek).

What I wanted to do was let people spin up a VM, install SSDT (or Visual Studio with SSDT), install a local dev instance of SQL Server (or use localdb), run a script with some parameters, and have everything they need to make changes to some code and have that code automatically deployed to a production database.

Now, a little word about that word "production": you could set this up to point at your production database, but what I would suggest for the demo is that you use a copy of your production database, or something you will call "production". The tools here can all be used in a real life setup, but you wouldn't normally host everything on your development machine.

How does it work?

The idea is that anyone who can download from the internet can do this, so set up the pre-requisites (SSDT and SQL Server) and then either clone the repo (https://github.com/GoEddie/SQLServer-Continuous-Deployment-In-A-Box/) or download the latest zip from:

https://github.com/GoEddie/SQLServer-Continuous-Deployment-In-A-Box/arch...

Note: to get the zip you don't need to use git or sign up for an account or anything (other than clicking the link).

Once you have that, extract the folder, open PowerShell as an administrator, change to the src folder and run these two little commands:

Unblock-File *.ps1
.\ContinuousDeploymentFTW.ps1

What does this do?

Unblock-File *.ps1 - removes the flag that windows puts on files downloaded from the internet to stop them being run.
.\ContinuousDeploymentFTW.ps1 - runs the install script, which actually:

  • Downloads chocolatey
  • Installs git
  • Installs Jenkins 2
  • Guides you how to configure Jenkins
  • Creates a local git repo
  • Creates a SSDT project which is configured with a test project and ssdt and all the references that normally cause people problems
  • Creates a local Jenkins build which monitors your local git repo for changes
  • When code is checked into the repo, the Jenkins job jumps into action and...

If you check into the default branch "master" then Jenkins:

  • Builds the SSDT project
  • Deploys the project to the unit test database
  • Runs the tSQLt unit tests
  • Generates a deployment script for the "production" database

And what you have there is continuous delivery in a box. Now, I know that isn't what you were sold by the title, but I wanted to show a couple of different approaches, so use git to create a release branch and check in on it, by changing to the directory with the SSDT project in PowerShell and doing:

git checkout -b release

Make a change and then...

git add .

git commit -m "a change that will go straight to production \o/"

You will see that a "Release" Jenkins build is created automatically, because the job we set up initially is a "Jenkins Multi-branch pipeline" - don't worry about that, but what you see is Jenkins:

  • Builds the SSDT project
  • Deploys the project to the unit test database
  • Runs the tSQLt unit tests
  • Deploys the SSDT project to the "production" database

Nice hey?

Why the choice of technology?

SSDT - you don't need SSDT; you could do this with ReadyRoll, DbUp etc., anything
Git - aren't most people moving to git nowadays?
Jenkins 2 - for the multi-branch pipelines, which means it automatically creates builds from the Jenkinsfile that is checked into source control

Sounds hard to setup?

It isn't. All you need to do is configure Jenkins, create a user and give my script the username and token, plus the connection details for the unit test and production databases. If you like, when you get the SSDT project you can import from your production database, which will then be deployed to your unit test database; or you can leave it empty, or add a couple of objects - whatever suits you!

Prerequisites

I would create a VM, install SSDT or Visual Studio with SSDT (2015 or 2017), install a local SQL Server 2008+ and restore a copy of your production database, and that should be it.

I made a video to show the awesomeness of all of this:

https://the.agilesql.club/assets/videos/SQLCDINABOX.mp4

I made the video to see how much fun it was to make videos; it was very fun, but this will be the only one ;)

Enjoy and good luck!

ScriptDom parsing and NoViableAltExceptions

  • Posted on: 2 March 2017
  • By: Ed Elliott

If you have ever tried to debug a program that uses the TSql Script Dom to parse some T-SQL you will know that the process is extremely slow, and this is due to the volume of NoViableAltExceptions (and others) that are thrown and then caught. These are first chance exceptions that are handled; it is just the way that the script dom interacts with Antlr and the lexer it uses. When you debug a program what happens is you have two processes: process one is the debugger, which starts (or attaches to) process two, the debuggee.

The debugger calls a windows function, WaitForDebugEvent, typically in a "while(true)" loop (everyone should write a windows debugger at some point in their lives, you learn so much; in fact put down SSMS and go write your first debugger loop: https://msdn.microsoft.com/en-us/library/windows/desktop/ms681675(v=vs.85).aspx). The debuggee app is then run, and when something interesting happens, like an exception or a dll being loaded/unloaded, the debuggee is paused (i.e. all threads stopped); then WaitForDebugEvent returns and the debugger can look at the child process and either do something or call WaitForDebugEvent again. Even if the debugger doesn't care about the exceptions the debuggee is still paused; when you parse T-SQL under a debugger, even if you tell Visual Studio to ignore the exceptions, the debuggee is still paused for every exception, just so Visual Studio (or your home baked debugger) can decide it wants to ignore that exception and start the debuggee up again.

What this means for an app that throws lots of first chance exceptions is a constant start, stop, start, stop, which is so, so painful for performance - it is basically impossible to debug a TSql Script Dom parse on a large project. I typically debug a project with just one table and one proc and hope it gives me everything I need, or do other tricks like letting the parsing happen without a debugger attached and then attaching a debugger at the right point after the parsing has happened - but then again, I don't have to debug the TSql lexers!

So where is this leading?

I was wondering what effect these first chance exceptions had on T-SQL and even in normal operations where we don't have a debugger attached, is there something we can do to speed up the processing?

The first thing I wanted to do was to try to reproduce a NoViableAltException, I kind of thought it would take me a few goes but actually the first statement I wrote caused one:

"select 1;"

This got me curious so I tried just:

"select 1"

Guess what? No NoViableAltException the second time. This didn't look good - should we remove all the semi-colons from our code? (spoiler: no!)

Ok, so we have a reproducible query that causes a first chance exception. What if we parse it, say, 1000 times and record the time, and then another 1000 times with the semi-colon replaced with a space (so it is the same length)?

Guess what? The processing without the semi-colon took just over half the time of the queries with semi-colons: the time to process the small query with a semi-colon was 700ms and without it 420ms. Much faster, but who cares about 300 milliseconds? It is less than a second and really won't make much difference to the overall time to publish a dacpac.
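The harness for this kind of experiment is only a few lines with the ScriptDom parser. A sketch, assuming the Microsoft.SqlServer.TransactSql.ScriptDom assembly is referenced (I use the 130 parser here to match the era; the timings in the post are from my machine, so expect different numbers):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using Microsoft.SqlServer.TransactSql.ScriptDom;

class ParseTiming
{
    static long TimeParses(string sql, int iterations)
    {
        var parser = new TSql130Parser(true); // true = QUOTED_IDENTIFIER on
        var sw = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
        {
            IList<ParseError> errors;
            parser.Parse(new StringReader(sql), out errors);
        }
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        // same length either way, so we are timing the same amount of input
        Console.WriteLine("with semi-colon:    " + TimeParses("select 1;", 1000) + "ms");
        Console.WriteLine("without semi-colon: " + TimeParses("select 1 ", 1000) + "ms");
    }
}
```
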

I thought I would have one more go at validating a real-life(ish) database, so I grabbed the World Wide Importers database, scripted out the objects and broke them into batches, splitting on GO and either leaving the semi-colons or removing them all. With semi-colons the processing took 620 ms and there were 2403 first chance exceptions. The second run, without semi-colons (which would likely create invalid sql in some cases), took 550 ms and there were still 1323 first chance exceptions. I think if we could get rid of all the first chance exceptions the processing would be much faster, but ho hum - to handle the first chance exceptions you just need a fast CPU and not to be a process that is being debugged.
