13 June 2017
Richie Lee

Keeping The Database Dev Ops Overhead Lightweight

Hello!


One very important aspect of Dev Ops that is perhaps overlooked is the overhead that comes with adopting Dev Ops practices. To help explain what I mean, let’s break that sentence down a bit.

What Do I Mean By “Dev Ops Practices”?

I have a strong suspicion that for each of the posts for this T-SQL Tuesday on Database Dev Ops, everyone will have a slightly different take. Or rather, they are going to articulate what Dev Ops means to them. And so here is my take: broadly speaking, Dev Ops is about increasing the cadence of a feature getting from a story board into production, through the practices of Continuous Integration and Continuous Deployment. There’s more to this, but the aim of automating this process is to reduce the number of errors that occur along the way, be they deployment issues or bugs in the code.

What Do I Mean By “Overhead”?

If you are deploying database changes manually then chances are you are doing it in one of several ways, including:

  • You have a change script that is created manually and then handed over to someone to execute.
  • Even if you use something like SSDT, you could manually create the script by taking the dacpac from your local machine and executing “sqlpackage.exe” on the prod box, using the dacpac as the source.
  • Or you could even have a CI process to create a dacpac, but the step to create the diff script is manual.

The options (and indeed the deviations) are endless. The point is that if there’s not much tooling used, then the effort to keep that software up-to-date is practically non-existent. This sounds like a good thing, but it probably isn’t: if no two deployments are ever the same, then you’re not testing your deployment process, and deployment issues are far more likely to occur when the process is largely manual.
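To make the manual sqlpackage step above concrete, it typically looks something like this. All paths, server names, and the database name are illustrative placeholders, not details from the original post:

```powershell
# Illustrative only: publish a dacpac built locally straight against production.
# The install path, dacpac path, and server/database names are placeholders.
& "C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\sqlpackage.exe" `
    /Action:Publish `
    /SourceFile:"C:\builds\MyDatabase.dacpac" `
    /TargetServerName:"PRODSQL01" `
    /TargetDatabaseName:"MyDatabase"
```

Every one of those values is typed by hand each time, which is exactly where the inconsistency between deployments creeps in.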

Let’s contrast that with an automated process at a client: a user checks code into a VCS, which triggers a build in TeamCity. The database solutions are SSDT, so SSDT needs to be on the build boxes. If the build is successful then a Nuget package is created, containing all the dacpacs/deploy scripts, and pushed to a Nuget repository. The build finally creates a release in Octopus based on the corresponding build number, and Octopus pulls down the Nuget packages from the Nuget repo. Octopus then deploys the changes using Tentacles that are not installed on the boxes being deployed to, but rather are a shared resource across the entire estate.
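In command form, the packaging and release steps of such a pipeline look roughly like this sketch. The package id, feed URL, and Octopus project name are made up for illustration, and the environment variables are assumed to be provided by the CI server:

```powershell
# Sketch of the CI steps after a successful build; all names are hypothetical.
nuget pack .\MyDatabase.nuspec -Version "1.0.$env:BUILD_NUMBER"

# Push the package to the internal Nuget repository.
nuget push ".\MyDatabase.1.0.$env:BUILD_NUMBER.nupkg" `
    -Source "http://nugetrepo/api/v2/package" -ApiKey $env:NUGET_API_KEY

# Create the Octopus release tied to this build number; Octopus pulls the
# package from the feed and hands it to the shared Tentacles to deploy.
octo create-release --project "MyDatabase" `
    --version "1.0.$env:BUILD_NUMBER" `
    --server "http://octopus" --apiKey $env:OCTOPUS_API_KEY
```

The point is that every release follows the identical scripted path, so the deployment process itself is exercised on every check-in.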

So in this scenario, we have the following apps that need to be up and running for this to work:

  • a VCS
  • TeamCity
  • Nuget repo
  • Octopus Deploy

In addition, we also require SSDT installed on both the TeamCity agents and the Octopus Tentacles. This is what I mean by overhead: servers need to be patched, software needs to be updated, and the software that supports the build/deploy process needs to be present as well as matching the version that the developers have on their boxes. This last point is the real stickler: TeamCity agent licencing is not cheap, so an organisation may want to share agents across teams. And if teams are on different SSDT/Visual Studio versions (because you need some sort of Visual Studio installed), then this patching/updating becomes a nightmare. Frankly, it’s unfeasible.

This is Where Nuget Comes In

The SSDT team worked hard to provide all the libraries you need in one Nuget package to help build and deploy SSDT-based projects. This removes the need to install Visual Studio/SSDT on a build agent, and also makes targeting versions across build/deploy boxes far easier. It also means only the dev team needs to know which version of SSDT they require: the Ops side of Dev Ops is not concerned with what software is required by the dev team to build/deploy successfully. It’s available on Nuget. The only pre-requisite is that MSBuild Tools 2015 is installed on the box. Which isn’t great, and there’s a Connect issue raised by me to try to get this resolved.

Below is a PowerShell script that will download both Nuget and headless SSDT. The other two scripts are for compiling and publishing the databases using the Nuget package.
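(The original embedded gists did not survive extraction, so the following is a minimal sketch of the download step, assuming the Microsoft.Data.Tools.Msbuild package is the “headless SSDT” in question; the working directory is a placeholder.)

```powershell
# Sketch only: grab nuget.exe, then pull down the SSDT build libraries.
# The package id Microsoft.Data.Tools.Msbuild is an assumption based on the
# SSDT team's Nuget offering at the time; the paths are placeholders.
$workingDir = "C:\ssdt-nuget"
New-Item -ItemType Directory -Path $workingDir -Force | Out-Null

# Download the standalone Nuget command-line client.
Invoke-WebRequest -Uri "https://dist.nuget.org/win-x86-commandline/latest/nuget.exe" `
    -OutFile "$workingDir\nuget.exe"

# Install the SSDT package - no Visual Studio required.
& "$workingDir\nuget.exe" install Microsoft.Data.Tools.Msbuild -OutputDirectory $workingDir
```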

The database I am using in these samples is on GitHub here.
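(Again, hedged sketches rather than the original gists: compiling with msbuild pointed at the Nuget copy of SSDT, then publishing with the sqlpackage.exe that ships in the same package. The property names, package version, and paths are assumptions.)

```powershell
# Build the .sqlproj using the SSDT targets from the Nuget package rather than
# a Visual Studio install. SQLDBExtensionsRefPath/SqlServerRedistPath are the
# properties the SSDT targets look for; treat the exact values as assumptions.
$ssdtPath = "C:\ssdt-nuget\Microsoft.Data.Tools.Msbuild.10.0.61026\lib\net40"
& "C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe" .\MyDatabase.sqlproj `
    /p:SQLDBExtensionsRefPath=$ssdtPath `
    /p:SqlServerRedistPath=$ssdtPath

# Publish the resulting dacpac with the sqlpackage.exe from the same package.
& "$ssdtPath\sqlpackage.exe" /Action:Publish `
    /SourceFile:".\bin\Debug\MyDatabase.dacpac" `
    /TargetServerName:"localhost" /TargetDatabaseName:"MyDatabase"
```

Note that only MSBuild Tools needs to be on the box; everything SSDT-specific came down via Nuget.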

The other benefit of using Nuget packages over installing software is rolling back: how much effort is it to roll back an update to Visual Studio? Now compare that to rolling back a Nuget version number. Not only is it quicker, but there is no outage, and it doesn’t affect any other team. These are all good things.
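Rolling back is then just a matter of pinning an explicit earlier package version (the version number here is hypothetical):

```powershell
# Pin an earlier version of the SSDT package; no installer run, no outage,
# and no other team on the shared agent is affected.
& "C:\ssdt-nuget\nuget.exe" install Microsoft.Data.Tools.Msbuild `
    -Version 10.0.61026 -OutputDirectory "C:\ssdt-nuget"
```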

So if we can reduce the amount of software we need to install on build/deploy boxes, we reduce the amount of software that has to be tracked for patching. In addition, by using Nuget packages we can empower dev teams to upgrade/roll back the software with no issues.
