CI

You have your database in source control, now what?

  • Posted on: 16 November 2017
  • By: Ed Elliott

This post is for a specific type of person. If you:

  • Are new to source control
  • Are getting started on your path to continuous delivery nirvana
  • Have been able to get your database into some sort of source control system
  • Have more than one person checking in code to the database source
  • Are unsure what to do next

Then this post is for you!

Choosing a source control system and tools to manage your database code is a great hurdle to have got past, well done!

Actually getting your database into that source control system is another difficult hurdle to get past, double well done!

Now you should be starting to think about the next steps. The sort of things on the horizon, not immediately but not too far away (1-6 months), are:

  • Generating deployment scripts
  • Provisioning test databases to deploy to
  • Writing some tests
  • Automating the running of those tests
  • Using a build server to do all this for you
  • Sacking your DBAs (jokes, ha ha)

But what about now?

The things I would do now are:

  • Determine your branching and release strategy
  • Start checking in and merging changes
  • Generate/gather deploy scripts
  • Nothing else

These things are so important to get right early on, but it is hard to choose the correct approach until you are where you are now: until you have chosen a process and started using it, it is hard to know whether it works for you.

Determine your branching and release strategy

How are you going to manage your code while changes are going on? Typically you will want to be able to:

  • Check new features into the code repository
  • Check in hotfixes to the current production code without losing those changes when the latest features are checked in
  • Allow multiple developers to check code in at the same time

There are a few different approaches. My favourite is the git approach of creating a branch for each feature and merging back to master when the feature is ready to go to production, but this favours getting changes out to production pretty quickly, so you may not be ready for it yet.
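As a rough sketch, the feature-branch flow in git looks something like this (the branch name and commit message are just made-up examples):

git checkout -b feature/customer-email      # create a branch for the feature
# ...make your database changes and test them locally...
git add .
git commit -m "Add Email column to dbo.Customer"
git checkout master                         # switch back to the main line
git merge feature/customer-email            # merge the feature when it is ready for production
git push origin master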

Have a look at the different branching strategies that your source control system supports and decide on one that you and your team can understand and work with.

When you have decided on your branching strategy, stick to it and make sure everyone follows the system. If you have had no source control and suddenly have it, it takes quite a lot of discipline as a development team to follow the process: someone will forget to check something in, and someone will skip the process. Keep an eye on check-ins and make sure everyone follows the process - this is a big change in how SQL developers work, so understand that it is a major change for your team in and of itself.

Start checking in and merging changes

Checking in code is the next step, and having multiple developers means that you will have to start merging each other’s changes. Let the developers check in code and merge their changes, and try out different tools for merging. If you have TFS, you probably have Visual Studio, which has a decent merge tool built in. If you are using git, look at SourceTree or GitKraken.

You will get problems when you merge your changes. Do the first few merges together, and see how things like reformatting long stored procedures cause extra changes that are more challenging to deal with when other, smaller changes to the same procedures are merged.

Generate/gather deploy scripts

The next thing you will want to do is start to see some value from all this work, and I’m not suggesting that you start pushing all your changes to production yet (you haven’t even got any tests yet!). Whatever type of tool you have chosen (automatically generated scripts, managed migration scripts, etc.), find a way to generate those scripts or gather them together so you no longer have to spend x hours every release “gathering the scripts”. This starts to show value and has the benefit that people can’t easily bypass the source control system.
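For example, if you are using SSDT and dacpacs, a deployment script can be generated with sqlpackage.exe along these lines (a hedged sketch - the paths, server and database names are placeholders, and it assumes sqlpackage.exe is on the path):

# compare the dacpac to the target database and write the upgrade script to a file
sqlpackage.exe /Action:Script `
    /SourceFile:"C:\build\MyDatabase.dacpac" `
    /TargetServerName:"MyTestServer" `
    /TargetDatabaseName:"MyDatabase" `
    /OutputPath:"C:\build\MyDatabase.upgrade.sql"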

Nothing else

I don’t mean actually nothing - get on and do some actual work (you lazy little...)! What I mean is nothing more on your CI/CD process for now. You have a good 1-3 months of work to perfect what you have and get this bit nailed :)

Video of #sqlrelay talk - build a ci pipeline in 55 minutes

  • Posted on: 9 October 2015
  • By: Ed Elliott

My session yesterday was recorded. Apart from the fact that Mark and I were having a chat for about 25 minutes before it kicked off, it seemed to work quite well:


Skip forward to 24:32 to get to the actual content.

The lovely Mark at SQLRelay has edited the video and put it on YouTube, so you miss our chat at the beginning:

https://www.youtube.com/watch?v=9YJQTx3bPek

Any questions please shout!

Setting up a new CI server to build SSDT projects

  • Posted on: 23 September 2015
  • By: Ed Elliott

I am going to ignore the fact that people don't want to install Visual Studio on a build server; in the future there will likely be a way to install just the required components rather than everything. Today, the way I see it is that I develop in Visual Studio, so I am happy to install Visual Studio on the build server - for me, Visual Studio is a dependency of SSDT.

Additionally I find it useful to have Visual Studio on a build server to troubleshoot those really odd issues that you sometimes get when a project doesn't build on a build server.

The steps I follow to get a new CI server ready to build SSDT projects are:

Download Visual Studio Express (2015 is good)

https://www.visualstudio.com/en-us/products/visual-studio-express-vs.aspx

Click "Express for Desktop"

This will download the stub installer.

Download the full version of Express

Because we have just downloaded the stub, and we don't want to do that every time (or may not have internet access from our build server), run:

"wdexpress_full.exe /layout"

This will download the setup files - just under 2 GB.

Run the installer

On the build server copy the setup files (if they aren't already there) and run "wdexpress_full.exe"

Reboot

Do the reboot - you can't open Express until you have done this, just do it.

Testing

Don't be tempted to go off and create a build yet; validate that SSDT works and builds your project correctly, so...

Open Visual Studio; if it asks you to sign in, just say "not now thanks".

Open your project in Visual Studio and ensure it builds and creates a dacpac.

Once that builds, close Visual Studio and open the "MSBuild Command Prompt for VS2015" (a command prompt that can see msbuild), and ensure you can build the project and get a dacpac out:

cd path\to\your\ssdt\solution\folder

msbuild /t:clean
msbuild /t:build

If you have a dacpac that is created by /t:build then you have a good build server.
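If you want to script that check, a small PowerShell sketch along these lines works (the project name and output path are assumptions, and it assumes you run it from a prompt where msbuild is available):

# rebuild the SSDT project and check that a dacpac appears in the output folder
msbuild .\MyDatabase.sqlproj /t:Rebuild /p:Configuration=Debug

if (Test-Path .\bin\Debug\MyDatabase.dacpac) {
    Write-Host "dacpac created - the build server can build SSDT projects"
} else {
    Write-Error "no dacpac found - check the msbuild output"
}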

What about automating this?

Do it if you want to; personally, I would wait for the headless build stuff to come from Microsoft. If you have more than a couple of build servers to set up then automate it. If your developers have licenses for Visual Studio then you can install the same version, which supports unattended installs.

Of course, better than automating the install is to create a container using https://spoon.net and share that with everyone :)

I don't use the Visual Studio SSDT Publish

  • Posted on: 19 August 2015
  • By: Ed Elliott

I see questions pretty regularly on Stack Overflow and the SSDT MSDN forum that are some variation on "I am doing a publish through Visual Studio and it doesn't quite do what I want" or "I want to publish the database when I press F5, except when I don't want it to publish". The second variation I particularly enjoy, because I have always thought that what I want is a computer where I can think something and it will do it - I guess we are a little way off, but one day hopefully that will be possible.

It occurred to me pretty early on in my SSDT usage that publishing through Visual Studio wasn't really the way I wanted to do my development, so what I do is:

  • 1. Write a PowerShell script that calls sqlpackage.exe to deploy a dacpac to my dev instance (a minimal sketch follows after this list)
  • 2. Write my code in Visual Studio / SSDT (new code and living the refactoring dream)
  • 3. Build the project, ensure there are no errors
  • 4. Alt+Tab to my PowerShell session and run the deploy script
  • 5. Watch the PowerShell script and, if it deploys successfully, continue my work
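The deploy script itself (step 1) is only a few lines. A minimal sketch, assuming sqlpackage.exe from a SQL Server 2014-era install, with all paths, names and options as placeholders:

# deploy the dacpac to my dev instance
$sqlpackage = "C:\Program Files\Microsoft SQL Server\120\DAC\bin\sqlpackage.exe"

& $sqlpackage /Action:Publish `
    /SourceFile:".\bin\Debug\MyDatabase.dacpac" `
    /TargetServerName:"localhost" `
    /TargetDatabaseName:"MyDatabase" `
    /p:BlockOnPossibleDataLoss=False    # fine for a throwaway dev database, not for production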

This has a number of benefits:

Benefit #1 - Simplicity

I know exactly when a deploy is going to happen, and I know exactly what options it will use. If I want to add a variable or change a parameter then I can do it, and it is straightforward.

Benefit #2 - Errors + Warnings

I see any errors or warnings quickly and brightly. I don't really like the publish dialogue in SSDT because it sort of sits at the bottom and doesn't shout when it is finished; I often sit waiting after it has finished and then have to click around for the errors. This isn't a moan about the UI - I am just more comfortable with a command line and a bit of console colouring!

Benefit #3 - Monitor redeployments

If there is an issue, like a default constraint whose definition in SSDT is changed by SQL Server when it stores it (such as adding brackets around things), then it will be constantly re-deployed. By running in a command window I get to see exactly what changes are happening, and if I see a load of things being re-deployed every time I can do something about it, rather than the changes being hidden unless I really go looking for them.

Benefit #4 - Testing the CI process

This one is what I get most excited about (it really, really does excite me!). I use the same script to drive sqlpackage.exe for my local instance that I use on the CI server and in other environments to deploy changes and generate change scripts. This means that the most well tested part of my development process is the bit that is often seen as the most risky (i.e. deploying T-SQL changes). It also gives me the chance to really build confidence that sqlpackage.exe will work and that I can trust it.

On trusting sqlpackage.exe: I have used it a lot now - I must have done local deploys numbering in the low thousands - and I haven't had any issues that weren't caused by me. The last few versions have been reliable, and I am really excited about the future of SQL Server development with SSDT - no other RDBMS has anything anywhere near the capabilities of SSDT, and it is free!

Benefit #5 - Obvious where to put deployment contributors

If you are doing SSDT development seriously then you will likely want to use my deployment contributor at some time or another (http://agilesqlclub.codeplex.com/), or another contributor. Deploying contributors for Visual Studio is machine wide, whereas with sqlpackage.exe I can gather all the files, add an extensions directory and add the filter to that. It makes life so much simpler!
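As a rough sketch of that layout (every name here is hypothetical - the contributor dll and contributor id come from whichever contributor you are using, so check its documentation):

# keep a private copy of sqlpackage.exe and its dependencies, with an Extensions folder next to it
Copy-Item .\MyCompany.MyContributor.dll .\DeployTools\Extensions\

# ask sqlpackage to load the contributor during the deploy
& .\DeployTools\sqlpackage.exe /Action:Publish `
    /SourceFile:".\bin\Debug\MyDatabase.dacpac" `
    /TargetServerName:"localhost" `
    /TargetDatabaseName:"MyDatabase" `
    /p:AdditionalDeploymentContributors=MyCompany.MyContributor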

Complexity #1 - Extra effort

It isn't all chips and gravy (incidentally, I had chips and gravy for the first time recently; it was OK, but I am sticking to salt, vinegar and a bit of ketchup).

I need to write the PowerShell script and run it. It could just as well be a batch file or an app that uses the DacFx API to deploy the dacpac, but I prefer good old sqlpackage.exe.

Complexity #2 - Context switching from Visual Studio

I need to Alt+Tab out of Visual Studio and run something else. It would be nice to run it all from Visual Studio, but after the build I am likely going to run the application or tSQLt tests, so I would probably be context switching anyway.

I have played around with getting this step into Visual Studio; I have tried:

  • Using the command window to call powershell - this gets boring quite quickly, the colours are just so dull
  • Using StudioShell - this was the most interesting but ultimately I went back to the simpler approach

Summary

Overall I am really pleased that I use sqlpackage.exe rather than Visual Studio; I can still use the XML publish profiles, and it is a simple way to deploy changes or generate scripts.

Happy sqlpackage'ing!

Shared databases or private databases?

  • Posted on: 17 August 2015
  • By: Ed Elliott

Traditionally database developers have shared a database, and while this certainly made sense when everyone had a limited amount of RAM and a few hundred megabytes of hard disk space, it has been a while since we have had development machines without enough resources to easily run a few copies of Visual Studio and SQL Server - if that isn't the case then you really need to re-evaluate the cost of a new machine versus the cost of developers' time.

Shared databases make many things easier: you do not need to run the installer for SQL Server on everyone's machine, and let's be honest, the SQL Server installer is one of the worst (the worst?) installers in a Microsoft product that I have seen in a long time - you need to install the setup files before using those to install the product, odd.

Shared databases also mean that developers who do not want to know about SQL Server can develop their app code, point it at the shared database and develop to their hearts' content.

Shared all the way?

No, that was the past and we are now in the future. While shared databases do make some things easier, they also cause problems and promote bad practices. The process that I recommend is this:

  • 1. Each developer has their own instance, ideally on their own machine
  • 2. When the developer wants the latest version of the database they check out the code
  • 3. They then use sqlpackage.exe or SQL Compare (or whatever tool they use in the other environments) to deploy the changes to their local database (see the sketch after this list)
  • 4. They develop their code however they want (hint use SSDT)
  • 5. When happy the developers check in their changes which kicks off the normal CI process and merges their changes with all the other changes made by other developers
  • 6. When the other developers want the latest version of the database they follow from step 2, rinse and repeat
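A hedged sketch of steps 2 and 3, assuming git, an SSDT project, a LocalDB instance and sqlpackage.exe on the path (all of the names are placeholders):

git pull                                      # step 2 - get the latest database code
msbuild .\MyDatabase.sqlproj /t:Build          # build the dacpac

# step 3 - deploy the changes to the developer's private LocalDB instance
sqlpackage.exe /Action:Publish `
    /SourceFile:".\bin\Debug\MyDatabase.dacpac" `
    /TargetServerName:"(localdb)\MSSQLLocalDB" `
    /TargetDatabaseName:"MyDatabase"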

Why?

This forces everyone to use source control: if it isn't in source control then other developers won't get your changes. The most important part of this process is that when you check your code in, you get to merge your changes with everyone else's and you can decide what to do about conflicts. Without source control you get into the situation where changes can be lost or changed while you are developing - this is the main reason that Redgate's source control gives you the ability to lock objects, but in my opinion that is the wrong way to fix the problem; just stop using shared instances.

What else?

What we should all be aiming for as SQL developers is to build an automated deployment process including CI and CD, because of the well documented benefits of releasing little and often. If we are to do that then we need to trust the tools, and to trust the tools we need to use them. When the developers check out the latest version of the code, create a script to deploy the changes to their private database and use this same script when deploying to the other environments; it will quickly become the most well tested part of the deployment, which incidentally is often seen as the riskiest part of an automated deployment pipeline. Now that is a real win.

Anything else?

I often see on Stack Overflow and other forums people asking how to do things like create a report of all the things that are in a shared database but not in source control, so they can speak to the developers and tell them to check their code in. There are indeed ways to do this, BUT (and this is a big BUT) you shouldn't have to tell developers to check their code in. If they don't check code in then it doesn't exist, and giving them a shared instance and letting them be lazy is not the right thing to do. Just force them to use private databases; there will be an initial bit of pain, but in the long run it will make their development process much simpler.

Branching

If you have a single shared database for development then it makes feature branching hard. With private instances you check out the branch you want, deploy that specific code to your private database and then you are ready to go; if more than one person wants to modify an object in a different way on a shared database, you will keep tripping over each other.

Non-arguments

If everyone has private instances, it makes it hard to back up the databases - the simple answer is just do not back up your development databases. Check in everything you need to rebuild a development database from scratch, and if you lose the database, throw it away and rebuild it. Get into this habit and you will be a winner.

The cost of private instances is prohibitive - you can use LocalDB or SQL Express, which are free. If a developer has a SQL Server Developer edition license then they are licensed to run as many instances as they want, so go wild installing as many development instances as you like (just don't forget each developer needs a pretty cheap license).

The developers don't want to do it - it is a bit of a pain to get started with, but if you raise it as a possibility every time someone deletes someone else's change, or the development database has a problem (someone deletes all the records, etc.), then they might see the benefits. If you are more of a dictator than a team member then just force it on them and live the dream.

Keep your environments in-sync and your upgrade scripts fresh

  • Posted on: 21 July 2015
  • By: Ed Elliott

Whether we use a tool to generate upgrade scripts or we write upgrade scripts by hand, there are a few things that we should bear in mind to ensure we do not break existing databases.

Why is it important?

When you upgrade an application you just compile and then deploy the code in a single operation; the old code is removed and the new code put in place - simple.

When you have a database, you have all this data that you need to keep; you need to ensure that you are collecting the right data and that you aren't damaging the data that is already there. For me, the data in a database is really this organic thing: it grows and it shrinks. We really need to be careful when upgrading a database that we are doing the right thing - that we do not drop a table here or there, or update a whole table because of a missed where clause.

Process for managing upgrade scripts

I think there are two things we need to do:

  • 1. Keep all our environments/databases in sync
  • 2. Re-use the same scripts for each environment

1. Keep all our environments/databases in sync

This is an obvious one, but it is really so often overlooked - when you have different environments such as dev, test, QA, production, etc. (and all companies have a slightly different set), you really need to make sure that they are the same:

  • When changes are made, push them up through each environment
  • If an emergency fix is made in production, push it back through every other environment from the beginning
  • If a change is rolled back, remove it from everywhere it was deployed to
  • Periodically check that the databases are the same, if they are not then find out how they got in that state and find ways to stop it happening

If you do not make sure that they are the same, then when you are developing or testing or "productioning" you have no guarantee that what you are doing will work the same in any other environment, which makes the testing invalid.

2. Re-use the same scripts for each environment

If the databases are in sync then we can generate our deploy scripts early on in the process, most likely against a QA or production-mirror environment, and re-use the same script for each further environment until we get to production.

This means that we get the opportunity to validate not only that the change works in each environment, but also that the upgrade script works in each environment. I would strongly suggest you do this for all deployment methods, whether that is manual upgrade scripts or compare + merge using SSDT or the Redgate tools.
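A hedged sketch of that flow (server names, paths and options are all placeholders, and it assumes sqlpackage.exe and sqlcmd.exe are on the path):

# generate the upgrade script once, against a mirror of production (or QA)
sqlpackage.exe /Action:Script `
    /SourceFile:".\MyDatabase.dacpac" `
    /TargetServerName:"qa-mirror" `
    /TargetDatabaseName:"MyDatabase" `
    /OutputPath:".\upgrade.sql"

# then run that same script, unchanged, against each environment in turn
foreach ($server in "test-sql", "qa-sql", "prod-sql") {
    sqlcmd -S $server -d MyDatabase -i .\upgrade.sql -b
}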

Pre-Compare & Pre-Deployment Scripts to SSDT

  • Posted on: 20 July 2015
  • By: Ed Elliott

When you use SSDT to deploy database changes you can include pre and post deployment scripts which are run as part of the deployment. The process looks like:

  • 1. sqlpackage.exe compares dacpac to database
  • 2. sqlpackage.exe generates a deploy script
  • 3. sqlpackage.exe runs pre-deployment script
  • 4. sqlpackage.exe runs deploy script (generated in step 2)
  • 5. sqlpackage.exe runs post-deploy script

This happens whether you get sqlpackage.exe to do the deployment or whether you generate a script for later deployment.

If you generate a script for later deployment, what you end up with is one long script containing the pre-deploy script, the deploy script and the post-deploy script.

There are times when you want a script to run BEFORE the compare happens. If SSDT were to allow a pre-compare script, I believe it would cause quite a bit of confusion, as the script would need to be run before things like getting a change report or generating a script for later deployment - but what if the pre-compare script contains something that shouldn't be run until deployment time? The best case scenario is confusion; the worst case is deploying something at the wrong time - and forget about cases where deploy scripts are generated against a mirrored copy of production!

Instead, what we should do is build our own pre/post sqlpackage.exe scripts, so the CI process would be something like:

  • 0. CI build runs pre-sqlpackage.exe script
  • 1. sqlpackage.exe compares dacpac to database
  • 2. sqlpackage.exe generates a deploy script
  • 3. sqlpackage.exe runs pre-deployment script
  • 4. sqlpackage.exe runs deploy script (generated in step 2)
  • 5. sqlpackage.exe runs post-deploy script
  • 6. CI build runs post-sqlpackage.exe script

Doing it this way we know what each script does and when it will run, and in which cases that is fine and where it should definitely not happen.
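A sketch of what that CI wrapper could look like (the script names, server and database are placeholders, and it assumes sqlcmd.exe and sqlpackage.exe are on the path):

$targetServer = "ci-sql-server"

# 0. pre-sqlpackage.exe script - runs before the compare ever happens
sqlcmd -S $targetServer -d MyDatabase -i .\pre-sqlpackage.sql -b

# 1-5. sqlpackage.exe does the compare, generates the deploy script and runs the
#      pre-deployment script, the deploy script and the post-deploy script
sqlpackage.exe /Action:Publish `
    /SourceFile:".\MyDatabase.dacpac" `
    /TargetServerName:$targetServer `
    /TargetDatabaseName:"MyDatabase"

# 6. post-sqlpackage.exe script - runs after everything else
sqlcmd -S $targetServer -d MyDatabase -i .\post-sqlpackage.sql -b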

To fit this into SSDT we can keep the script in the project and set the "Build Action" to "None" and the "Copy to Output Directory" to "Copy Always" or "Copy if Newer". This means that the script is not in the dacpac, but it is in the output directory, so you can move the file(s) along with the dacpac and run them with whatever you use to run sqlpackage.exe.

I did play around with getting the script into the dacpac; it is hard to get it in and hard to get it out again using the built-in tools. You could use the .NET packaging API https://msdn.microsoft.com/en-us/library/system.io.packaging(v=vs.110).aspx, but I do not know if sqlpackage gets upset at extra files in the dacpac.

set script properties to include in output folder

Run SQL Server In A Container Today July 2015

  • Posted on: 7 July 2015
  • By: Ed Elliott

Containers are one of the big things at the moment, and Microsoft has recently announced that they will be bringing official support to Windows. While this is great news, it is unlikely to be something most IT departments can use in the near term, mainly because it takes time for everyone to get up to date with the latest version of Windows - which, in this case, is still to be written.

Containers provide a number of real benefits, and while there is not yet official support for them, that does not mean we cannot use them to help us, especially in development and test environments. We could also potentially use containers in other cases, such as versions of SQL Server where support has ended and they are not officially supported anyway, but each organisation will have different criteria.

What are the use cases for SQL Server and containers today?

Containers let you run multiple copies of processes in their own sandbox so they cannot affect other processes, but they do not have the same security boundary as a process running on its own physical or virtual machine. I think that until Microsoft provides official support we should, for the most part, rule out any production instances of SQL Server. This, for now at least, makes the discussion about using containers without official support easier.

Continuous Integration

This is where I started looking at running SQL Server in a container. What we can do is for every build that runs, spin up a new instance of SQL Server with a copy of the target database or databases in the state of the last production deployment, deploy the new code, run the tests, gather the results and destroy the new instance of SQL Server.

This sounds a bit wasteful but it is actually really useful in a number of ways:

  • If we use the compare/deploy method of deploying our code, every build runs the same upgrade that we would run in production, instead of either restoring a backup before every build or running incremental changes in the CI environment but larger bulk changes in production. I would much rather test the changes from the CI environment upwards.
  • We do not need a separate SQL instance or set of databases for each consecutive build
  • If there is code that makes cross database calls or writes to system databases like msdb or master, every build gets its own instance so again you do not need to have multiple instances hanging around to do consecutive builds
  • If you do not control the build environment such as Visual Studio online, you can spin up your own database instance on the build server and use that to test your code

Development Instances

Each developer should have their own instance of SQL Server. If you work in an environment where you have different versions of SQL Server or different settings (server collation!) for each application then having lots of different versions of SQL Server installed can be a pain. I understand that installs can be scripted but it isn't a great experience.

Instead if you use containers a developer or tester could simply start whichever version of SQL Server they needed for whatever they were working on at the time and not have to worry about whether they had SQL 2008 R2 with binary collation etc.

It also helps with the "Joel on Software" test - can you check out and build in one step? Yes, but it takes an hour to get SQL Server installed to test it!

Relocating instances

When you run a container from an image you do not need to install anything so if you wanted to move to a different O/S you could pick it up and move it. That seems pretty cool and I can see this being used more when production instances are supported.

Recreating complex environments

If you have a test environment that includes replication and other fun things, then you may need a number of instances to re-create it; using containers you can run them on a single host but still re-create the actual environment. Think about this: if you have instance A that replicates data to instance B, both of them are the default instance, you have apps hard coded to connect to the server "." and you also connect to a database with a hard coded name - typically you would have to use two different machines to reproduce this, but using containers you can start them both and make them both think they are listening on port 1433. W o w, sorry, let me repeat that in capital letters: W O W!

Running apps without having to use the installer

There are lots of apps you can run directly without having to install them - think of Chocolatey without the install step. Start Chrome, Firefox, IE, MySQL, SQL Server, anything. You can run them without having to install them, which is really useful for just about everyone.

Can we use containers on Windows today?

Yes, but let's look at the technology first and how it can be applied to run applications within containers on Windows.

What is a process?

Code that runs on a computer, on Windows a process:

  • Reads and writes files
  • Reads and writes to the registry (which is actually stored in a file)
  • Makes Win32 API calls - whether the app is .net or whatever is irrelevant, under the hood Win32 APIs are called on Windows (forget that kernel drivers call the nt subsystem, we are talking apps here)
  • Accepts and creates network connections and reads and writes data from them

Whether your application is notepad or SQL Server, in this context, they do the same thing.

What is an image?

An image is the installed state of an application stored in a reusable format. On Windows they are often created by taking a snapshot of the files and registry, installing an application and then taking a further snapshot to see what has changed. The changes are then gathered up and made available as an image that can be downloaded and run on another machine.

What is a container?

A container is where an image or set of images is downloaded onto the machine running the container and the applications inside the image are run. Typically, when they run, they believe that they have access to the entire machine and are sitting in "\Program Files\Whatever", but actually they are normally in a user's temporary folder. It is this deception that keeps containers within their sandbox. To enable this sandbox approach to running images, we can use app virtualization technology.

What is App Virtualization?

App virtualization is where you run a process and redirect file, registry and other API calls somewhere else, so an application may call the Win32 API "CreateFile" and ask to read the file "c:\program files\microsoft sql server\120\MSSQLSERVER\binn\sqlservr.exe" but actually the path is changed to "c:\Users\ed\AppData\Blah\sqlservr.exe". This means that, using an image created on another machine, I can run SQL Server without ever having to install it - or run it alongside another version, whether or not they normally read and write to the same locations.

There are a number of app virtualization providers on the market including Microsoft, Spoon and Symantec. To see a fuller list of vendors see "Vendor-specific implementations" on http://en.wikipedia.org/wiki/Application_streaming.

Using Spoon

Spoon.net has been operating since the mid-2000s, previously as Xenocode. They already had the virtualization technology to run processes in sandboxes, and their Spoon Studio software which creates the images; they have now added the ability to publish images and store them in a private or public repository, from which they can be run on any Windows machine, and it is this extra support which allows us to use containers on Windows today.

This is beginning to sound like marketing, do you work for Spoon?

No, I don't, and I am not affiliated with Spoon in any way. I have used their products before and started discussing how we could use app virtualization to run SQL Server in containers with the CEO on Twitter - this, by the way, is one of the reasons I love Twitter! I tested it out and like it. I wouldn't recommend something unless:

  • it was something everyone could afford - plans start at free
  • it fundamentally worked - tick
  • their support is responsive and resolves any issues - I had one issue, which was resolved quickly; I am actually glad I had an issue, as it is only when you use support that you find out what they are like. I found them to be polite, knowledgeable and interested in fixing my problem.

Sounds expensive

No, a basic plan, which includes everything you need to start creating and running public images, is free. To have a private repository of images you need to pay, and it costs more to self-host your repository, but it really isn't a lot - and if it means you can run fewer Windows instances then you are more than likely going to save money on licenses.

Enough talk, let's see this in action

I like your style.

To use the Spoon containers you will need a Spoon.net account; go to:

https://Spoon.net/

Click the "Try Free" button to create an account, no credit card needed etc.

When you have created an account, make sure you are on the "Dashboard" page and click on "New" and then "Spoon Shell"; this will download the Spoon shell, which you can install. It is a small application that lets you control images and containers - if you want to run a container as part of a CI process you do not need to install it, but it makes our demo easier for now.

Screen shot showing the button to start a new Spoon Shell

Go ahead and install "spoon-plugin.exe", which will let you run Spoon commands from a cmd window. You should install it on the machine, real or virtual, that you want to start a new instance of SQL Server on.

Note that after the Spoon shell is installed we will download the Express edition of SQL Server 2014 and start it on your machine; if you have policies against running strange code on your machine, run it on a VM or somewhere you are allowed to. If you already have an instance of SQL Server on your machine, fear not - this will start a new instance and use the Spoon app virtualization technology to make sure they exist side by side.

When the shell is installed we simply need to create a new container based on an existing image; Spoon have already created images of SQL Express 2012 and 2014, either with or without SSMS included. There is nothing stopping you from creating your own images, so if you need SQL 2000 Enterprise edition you can do that, but you should keep it to yourself rather than sharing the image, due to the way SQL Server is licensed.

If you now start a cmd.exe prompt (there is no need for an elevated prompt), we can create a container from the SQL Express image. In the cmd window, use the spoon command:

spoon run --route-block=tcp sqlserver/sqlserver2014-express

This will download the image onto your machine and cache it, so the first time an image is started it will take as long as it takes to download ~1 GB over the internet; subsequent starts will be significantly quicker. Spoon are working on a new system that will significantly reduce download times, but it is not available yet.

This will download the SQL Express image, start the sqlservr.exe service and also start a version of cmd that can “see into” the container. If you run sqlcmd (sqlcmd.exe -S 127.0.0.1,1433) you can run the normal SQL Server commands. Do something cool like "select @@servername" followed by "GO" - it will literally knock your socks off!

OK so I have a new SQL Server instance, now what?

Now you have the plug-in installed you can do all sorts of cool things. If you use sqlcmd.exe to make some changes - create a database, for instance - then from a cmd window run "spoon containers" and you should see the container listed. Run "spoon stop [containerid]" - this should stop the instance - then "spoon start [containerid]" and you will see that your new database still exists. Now let’s pretend that we want another instance of SQL Server, so run "spoon run sqlserver/sqlserver2014-express" - look, two separate instances which both think they are listening on the same port, and two instances of sqlcmd which can each connect to their own, completely independent, instance:

Run multiple instances of SQL Server in a container on windows using spoon

Wait WAT?

Have a play around. To start and stop containers use the commands "spoon start" and "spoon stop"; when you want to throw your containers away, just do "spoon rm --all" to throw them all away, or "spoon rm [containerid]" to get rid of just one.

Additionally, instead of "spoon run" you can do "spoon try", which creates a temporary container that you don't need to "spoon rm" to get rid of; just shut down SQL Server (sqlcmd.exe --> “shutdown” is the easiest) or "spoon stop" the container id.
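Pulling the commands from this post together, a typical session looks roughly like this (the text after each # is just a note, and [containerid] is whatever "spoon containers" reports):

spoon run --route-block=tcp sqlserver/sqlserver2014-express   # start a container (the image is downloaded and cached the first time)
spoon containers                                               # list your containers and their ids
spoon stop [containerid]                                       # stop an instance, its state is kept
spoon start [containerid]                                      # start it again, your databases are still there
spoon rm [containerid]                                         # throw one container away
spoon rm --all                                                 # or throw them all away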

Summary

If you followed the quick demo and got a SQL Server instance running on your machine then well done - but this is literally only scratching the surface of what Spoon can do.

In summary:

  • Run images of SQL Server wherever (on Windows) and whenever you want = cool
  • Really useful for development and test
  • Free to use public images, although if you use it seriously you will want to have private repositories
  • Running things you download off strangers on the internet is risky, do what you will with that warning
  • My use of Spoon is as a SQL Server developer, but Spoon isn’t limited to SQL Server - it works with almost every application, even your own in-house apps and services

Approaches to deploying SQL Server database code

  • Posted on: 27 April 2015
  • By: Ed Elliott

When you deploy your SQL Server code there are two approaches: the manual way and the automated "compare & deploy" way.

Manual

The manual way is where you track what changes you have made and, when you want to release, build a script or set of scripts which contain the updates. If you are modifying objects then you should use alter or drop/create, and you need to manually create rollback scripts (should you need them).

I don't like the manual method because:

  • If you work in a team, someone needs to co-ordinate what changes are being deployed
  • It is really easy to forget something and not include it in the deployment
  • The change scripts for each deployment are stored, and to get to a particular point in time you need to apply the base scripts and then all the updates
  • Manual = error prone, I have no statistics to show you but it is a fact, believe it

I do like the manual method because sometimes you want to do something that a tool cannot, for instance archiving data, or doing things like migrating data in chunks or in smaller transactions. You might have a better way of adding a table, or be better at working out which operations can be done online or offline, such as changing a column's metadata.

Compare / Merge

I like the compare/merge method because:

  • I can work in a team, we can all check our code in and a tool works out what code to deploy
  • No one has to gather the changes and risk forgetting to include something - if it is checked in then it goes in the deploy
  • I can include whatever manual scripts I like because I "build the deployment pipeline and I decide"
  • I can use the same system to generate rollback scripts automatically
  • I trust the tools I use (sqlpackage.exe or sqlcompare.exe) and they are not error prone
  • It is really easy to get a database into the current state, like 1-step-setup easy

I don't like the compare/merge method because I sometimes have to think about changes and manually script them.

Manual is a Sad Face, Automated is a Happy Face